00:00:00.001 Started by upstream project "autotest-per-patch" build number 130842 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.721 The recommended git tool is: git 00:00:01.721 using credential 00000000-0000-0000-0000-000000000002 00:00:01.725 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.738 Fetching changes from the remote Git repository 00:00:01.741 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.754 Using shallow fetch with depth 1 00:00:01.754 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.754 > git --version # timeout=10 00:00:01.766 > git --version # 'git version 2.39.2' 00:00:01.766 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.780 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.780 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.647 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.659 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.674 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD) 00:00:07.674 > git config core.sparsecheckout # timeout=10 00:00:07.686 > git read-tree -mu HEAD # timeout=10 00:00:07.706 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5 00:00:07.731 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions" 00:00:07.731 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10 00:00:07.828 [Pipeline] Start of Pipeline 00:00:07.839 [Pipeline] library 00:00:07.840 Loading library shm_lib@master 00:00:07.840 Library shm_lib@master is cached. Copying from home. 00:00:07.855 [Pipeline] node 00:00:07.864 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.865 [Pipeline] { 00:00:07.874 [Pipeline] catchError 00:00:07.875 [Pipeline] { 00:00:07.887 [Pipeline] wrap 00:00:07.895 [Pipeline] { 00:00:07.903 [Pipeline] stage 00:00:07.905 [Pipeline] { (Prologue) 00:00:08.086 [Pipeline] sh 00:00:08.478 + logger -p user.info -t JENKINS-CI 00:00:08.499 [Pipeline] echo 00:00:08.501 Node: GP8 00:00:08.508 [Pipeline] sh 00:00:08.820 [Pipeline] setCustomBuildProperty 00:00:08.830 [Pipeline] echo 00:00:08.831 Cleanup processes 00:00:08.836 [Pipeline] sh 00:00:09.121 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.121 1324031 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.133 [Pipeline] sh 00:00:09.418 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.418 ++ grep -v 'sudo pgrep' 00:00:09.418 ++ awk '{print $1}' 00:00:09.418 + sudo kill -9 00:00:09.418 + true 00:00:09.430 [Pipeline] cleanWs 00:00:09.437 [WS-CLEANUP] Deleting project workspace... 00:00:09.437 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.442 [WS-CLEANUP] done 00:00:09.446 [Pipeline] setCustomBuildProperty 00:00:09.460 [Pipeline] sh 00:00:09.738 + sudo git config --global --replace-all safe.directory '*' 00:00:09.803 [Pipeline] httpRequest 00:00:10.176 [Pipeline] echo 00:00:10.178 Sorcerer 10.211.164.101 is alive 00:00:10.185 [Pipeline] retry 00:00:10.187 [Pipeline] { 00:00:10.197 [Pipeline] httpRequest 00:00:10.201 HttpMethod: GET 00:00:10.201 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:10.202 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:10.225 Response Code: HTTP/1.1 200 OK 00:00:10.225 Success: Status code 200 is in the accepted range: 200,404 00:00:10.225 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:26.070 [Pipeline] } 00:00:26.089 [Pipeline] // retry 00:00:26.100 [Pipeline] sh 00:00:26.389 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:26.421 [Pipeline] httpRequest 00:00:26.746 [Pipeline] echo 00:00:26.748 Sorcerer 10.211.164.101 is alive 00:00:26.760 [Pipeline] retry 00:00:26.763 [Pipeline] { 00:00:26.778 [Pipeline] httpRequest 00:00:26.783 HttpMethod: GET 00:00:26.783 URL: http://10.211.164.101/packages/spdk_3d8f4fe535958a9bb1ad50a3ed57801f1b93011b.tar.gz 00:00:26.784 Sending request to url: http://10.211.164.101/packages/spdk_3d8f4fe535958a9bb1ad50a3ed57801f1b93011b.tar.gz 00:00:26.789 Response Code: HTTP/1.1 200 OK 00:00:26.789 Success: Status code 200 is in the accepted range: 200,404 00:00:26.790 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3d8f4fe535958a9bb1ad50a3ed57801f1b93011b.tar.gz 00:03:12.208 [Pipeline] } 00:03:12.227 [Pipeline] // retry 00:03:12.235 [Pipeline] sh 00:03:12.525 + tar --no-same-owner -xf spdk_3d8f4fe535958a9bb1ad50a3ed57801f1b93011b.tar.gz 00:03:17.812 [Pipeline] sh 00:03:18.092 + git -C spdk log --oneline -n5 00:03:18.092 3d8f4fe53 test/packaging: Zero out the rpath string 00:03:18.092 1b5ee3b10 test/packaging: Remove rpath workarounds in tests 00:03:18.092 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:03:18.092 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut 00:03:18.092 82c46626a lib/event: implement scheduler trace events 00:03:18.104 [Pipeline] } 00:03:18.119 [Pipeline] // stage 00:03:18.131 [Pipeline] stage 00:03:18.135 [Pipeline] { (Prepare) 00:03:18.154 [Pipeline] writeFile 00:03:18.170 [Pipeline] sh 00:03:18.453 + logger -p user.info -t JENKINS-CI 00:03:18.466 [Pipeline] sh 00:03:18.749 + logger -p user.info -t JENKINS-CI 00:03:18.762 [Pipeline] sh 00:03:19.044 + cat autorun-spdk.conf 00:03:19.044 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:19.044 SPDK_TEST_NVMF=1 00:03:19.044 SPDK_TEST_NVME_CLI=1 00:03:19.044 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:19.044 SPDK_TEST_NVMF_NICS=e810 00:03:19.044 SPDK_TEST_VFIOUSER=1 00:03:19.044 SPDK_RUN_UBSAN=1 00:03:19.044 NET_TYPE=phy 00:03:19.051 RUN_NIGHTLY=0 00:03:19.057 [Pipeline] readFile 00:03:19.087 [Pipeline] withEnv 00:03:19.090 [Pipeline] { 00:03:19.106 [Pipeline] sh 00:03:19.390 + set -ex 00:03:19.390 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:19.390 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:19.390 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:19.390 ++ SPDK_TEST_NVMF=1 00:03:19.390 ++ SPDK_TEST_NVME_CLI=1 00:03:19.390 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:19.390 
++ SPDK_TEST_NVMF_NICS=e810 00:03:19.390 ++ SPDK_TEST_VFIOUSER=1 00:03:19.390 ++ SPDK_RUN_UBSAN=1 00:03:19.390 ++ NET_TYPE=phy 00:03:19.390 ++ RUN_NIGHTLY=0 00:03:19.390 + case $SPDK_TEST_NVMF_NICS in 00:03:19.390 + DRIVERS=ice 00:03:19.390 + [[ tcp == \r\d\m\a ]] 00:03:19.390 + [[ -n ice ]] 00:03:19.390 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:19.390 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:19.390 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:19.390 rmmod: ERROR: Module irdma is not currently loaded 00:03:19.390 rmmod: ERROR: Module i40iw is not currently loaded 00:03:19.390 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:19.390 + true 00:03:19.390 + for D in $DRIVERS 00:03:19.390 + sudo modprobe ice 00:03:19.390 + exit 0 00:03:19.399 [Pipeline] } 00:03:19.415 [Pipeline] // withEnv 00:03:19.421 [Pipeline] } 00:03:19.437 [Pipeline] // stage 00:03:19.449 [Pipeline] catchError 00:03:19.451 [Pipeline] { 00:03:19.465 [Pipeline] timeout 00:03:19.466 Timeout set to expire in 1 hr 0 min 00:03:19.468 [Pipeline] { 00:03:19.482 [Pipeline] stage 00:03:19.484 [Pipeline] { (Tests) 00:03:19.501 [Pipeline] sh 00:03:19.784 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:19.784 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:19.784 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:19.784 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:19.784 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:19.784 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:19.784 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:19.784 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:19.784 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:19.784 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:19.784 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:19.784 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:19.784 + source /etc/os-release 00:03:19.784 ++ NAME='Fedora Linux' 00:03:19.784 ++ VERSION='39 (Cloud Edition)' 00:03:19.784 ++ ID=fedora 00:03:19.784 ++ VERSION_ID=39 00:03:19.784 ++ VERSION_CODENAME= 00:03:19.784 ++ PLATFORM_ID=platform:f39 00:03:19.784 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:19.784 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:19.784 ++ LOGO=fedora-logo-icon 00:03:19.784 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:19.784 ++ HOME_URL=https://fedoraproject.org/ 00:03:19.784 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:19.784 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:19.784 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:19.784 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:19.784 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:19.784 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:19.784 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:19.784 ++ SUPPORT_END=2024-11-12 00:03:19.784 ++ VARIANT='Cloud Edition' 00:03:19.784 ++ VARIANT_ID=cloud 00:03:19.784 + uname -a 00:03:19.784 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:19.784 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.162 Hugepages 00:03:21.162 node hugesize free / total 00:03:21.162 node0 1048576kB 0 / 0 00:03:21.162 node0 2048kB 0 / 0 00:03:21.162 node1 1048576kB 0 / 0 00:03:21.162 node1 2048kB 0 / 0 00:03:21.162 
00:03:21.162 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.162 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:21.162 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:21.162 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:21.421 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:21.421 + rm -f /tmp/spdk-ld-path 00:03:21.421 + source autorun-spdk.conf 00:03:21.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:21.421 ++ SPDK_TEST_NVMF=1 00:03:21.421 ++ SPDK_TEST_NVME_CLI=1 00:03:21.421 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:21.421 ++ SPDK_TEST_NVMF_NICS=e810 00:03:21.421 ++ SPDK_TEST_VFIOUSER=1 00:03:21.421 ++ SPDK_RUN_UBSAN=1 00:03:21.421 ++ NET_TYPE=phy 00:03:21.421 ++ RUN_NIGHTLY=0 00:03:21.421 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:21.421 + [[ -n '' ]] 00:03:21.421 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.421 + for M in /var/spdk/build-*-manifest.txt 00:03:21.421 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:21.421 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:21.421 + for M in /var/spdk/build-*-manifest.txt 00:03:21.421 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:21.421 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:21.421 + for M in /var/spdk/build-*-manifest.txt 00:03:21.421 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:21.421 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:21.421 ++ uname 00:03:21.421 + [[ Linux == \L\i\n\u\x ]] 00:03:21.421 + sudo dmesg -T 00:03:21.421 + sudo dmesg --clear 00:03:21.422 + dmesg_pid=1325863 00:03:21.422 + [[ Fedora Linux == FreeBSD ]] 00:03:21.422 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:21.422 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:21.422 + sudo dmesg -Tw 00:03:21.422 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:21.422 + [[ -x /usr/src/fio-static/fio ]] 00:03:21.422 + export FIO_BIN=/usr/src/fio-static/fio 00:03:21.422 + FIO_BIN=/usr/src/fio-static/fio 00:03:21.422 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:21.422 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:21.422 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:21.422 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:21.422 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:21.422 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:21.422 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:21.422 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:21.422 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:21.422 Test configuration: 00:03:21.422 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:21.422 SPDK_TEST_NVMF=1 00:03:21.422 SPDK_TEST_NVME_CLI=1 00:03:21.422 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:21.422 SPDK_TEST_NVMF_NICS=e810 00:03:21.422 SPDK_TEST_VFIOUSER=1 00:03:21.422 SPDK_RUN_UBSAN=1 00:03:21.422 NET_TYPE=phy 00:03:21.422 RUN_NIGHTLY=0 09:24:16 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:21.422 09:24:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:21.422 09:24:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:21.422 09:24:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:21.422 09:24:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.422 09:24:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.422 09:24:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.422 09:24:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.422 09:24:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.422 09:24:16 -- paths/export.sh@5 -- $ export PATH 00:03:21.422 09:24:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.422 09:24:16 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:21.422 09:24:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:21.422 09:24:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728285856.XXXXXX 00:03:21.422 09:24:16 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728285856.KRovJm 00:03:21.422 09:24:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:21.422 09:24:16 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:21.422 09:24:16 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:21.422 09:24:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:21.422 09:24:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:21.422 09:24:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:21.422 09:24:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:21.422 09:24:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.422 09:24:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:21.422 09:24:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:21.422 09:24:16 -- pm/common@17 -- $ local monitor 00:03:21.422 09:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.422 09:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.422 09:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.422 09:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.422 09:24:16 -- pm/common@21 -- $ date +%s 00:03:21.422 09:24:16 -- pm/common@25 -- $ sleep 1 00:03:21.422 09:24:16 -- pm/common@21 -- $ date +%s 00:03:21.422 09:24:16 -- pm/common@21 -- $ date +%s 00:03:21.422 09:24:16 -- pm/common@21 -- $ date +%s 00:03:21.422 09:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285856 00:03:21.422 09:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285856 00:03:21.422 09:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285856 00:03:21.422 09:24:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285856 00:03:21.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285856_collect-vmstat.pm.log 00:03:21.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285856_collect-cpu-load.pm.log 00:03:21.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285856_collect-cpu-temp.pm.log 00:03:21.681 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285856_collect-bmc-pm.bmc.pm.log 00:03:22.618 09:24:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:22.618 09:24:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:22.618 09:24:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:22.618 09:24:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.618 09:24:17 -- spdk/autobuild.sh@16 -- $ date -u 00:03:22.618 Mon Oct 7 07:24:17 AM UTC 2024 00:03:22.618 09:24:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:22.618 v25.01-pre-37-g3d8f4fe53 00:03:22.618 09:24:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:22.618 09:24:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:22.618 09:24:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:22.618 09:24:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:22.618 09:24:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:22.618 09:24:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.618 ************************************ 00:03:22.618 START TEST ubsan 00:03:22.618 ************************************ 00:03:22.618 09:24:17 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:22.618 using ubsan 00:03:22.618 00:03:22.618 real 0m0.000s 00:03:22.618 user 0m0.000s 00:03:22.618 sys 0m0.000s 00:03:22.618 09:24:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:22.618 09:24:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:22.618 ************************************ 00:03:22.618 END TEST ubsan 00:03:22.618 ************************************ 00:03:22.618 09:24:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:22.618 09:24:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:22.618 09:24:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:22.618 09:24:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:22.618 09:24:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:22.618 09:24:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:22.618 09:24:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:22.618 09:24:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:22.619 09:24:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:22.619 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:22.619 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:23.185 Using 'verbs' RDMA provider 00:03:39.093 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:51.289 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:51.548 Creating mk/config.mk...done. 00:03:51.548 Creating mk/cc.flags.mk...done. 00:03:51.548 Type 'make' to build. 
00:03:51.548 09:24:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:51.548 09:24:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:51.548 09:24:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:51.548 09:24:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.548 ************************************ 00:03:51.548 START TEST make 00:03:51.548 ************************************ 00:03:51.548 09:24:46 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:52.117 make[1]: Nothing to be done for 'all'. 00:03:54.032 The Meson build system 00:03:54.032 Version: 1.5.0 00:03:54.032 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:54.032 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:54.032 Build type: native build 00:03:54.032 Project name: libvfio-user 00:03:54.032 Project version: 0.0.1 00:03:54.032 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:54.032 C linker for the host machine: cc ld.bfd 2.40-14 00:03:54.032 Host machine cpu family: x86_64 00:03:54.032 Host machine cpu: x86_64 00:03:54.032 Run-time dependency threads found: YES 00:03:54.032 Library dl found: YES 00:03:54.032 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:54.032 Run-time dependency json-c found: YES 0.17 00:03:54.032 Run-time dependency cmocka found: YES 1.1.7 00:03:54.032 Program pytest-3 found: NO 00:03:54.032 Program flake8 found: NO 00:03:54.032 Program misspell-fixer found: NO 00:03:54.032 Program restructuredtext-lint found: NO 00:03:54.032 Program valgrind found: YES (/usr/bin/valgrind) 00:03:54.032 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:54.032 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:54.032 Compiler for C supports arguments -Wwrite-strings: YES 00:03:54.032 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:54.032 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:54.032 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:54.032 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:54.032 Build targets in project: 8 00:03:54.032 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:54.032 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:54.032 00:03:54.032 libvfio-user 0.0.1 00:03:54.032 00:03:54.032 User defined options 00:03:54.032 buildtype : debug 00:03:54.032 default_library: shared 00:03:54.032 libdir : /usr/local/lib 00:03:54.032 00:03:54.032 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:54.605 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:54.869 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:54.869 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:54.869 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:54.869 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:54.869 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:54.869 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:54.869 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:54.869 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:54.869 [9/37] Compiling C object samples/null.p/null.c.o 00:03:54.869 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:54.869 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:54.869 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:54.869 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:54.869 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:54.869 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:54.869 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:54.869 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:54.869 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:55.130 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:55.130 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:55.130 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:55.130 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:55.130 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:55.130 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:55.130 [25/37] Compiling C object samples/server.p/server.c.o 00:03:55.130 [26/37] Compiling C object samples/client.p/client.c.o 00:03:55.130 [27/37] Linking target samples/client 00:03:55.130 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:55.130 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:55.391 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:55.391 [31/37] Linking target test/unit_tests 00:03:55.391 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:55.655 [33/37] Linking target samples/server 00:03:55.655 [34/37] Linking target samples/null 00:03:55.655 [35/37] Linking target samples/gpio-pci-idio-16 00:03:55.655 [36/37] Linking target samples/lspci 00:03:55.655 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:55.655 INFO: autodetecting backend as ninja 00:03:55.655 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:55.655 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:56.594 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:56.594 ninja: no work to do. 00:04:03.150 The Meson build system 00:04:03.150 Version: 1.5.0 00:04:03.150 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:03.150 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:03.150 Build type: native build 00:04:03.150 Program cat found: YES (/usr/bin/cat) 00:04:03.150 Project name: DPDK 00:04:03.150 Project version: 24.03.0 00:04:03.150 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:03.150 C linker for the host machine: cc ld.bfd 2.40-14 00:04:03.150 Host machine cpu family: x86_64 00:04:03.150 Host machine cpu: x86_64 00:04:03.150 Message: ## Building in Developer Mode ## 00:04:03.150 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:03.150 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:03.150 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:03.150 Program python3 found: YES (/usr/bin/python3) 00:04:03.150 Program cat found: YES (/usr/bin/cat) 00:04:03.150 Compiler for C supports arguments -march=native: YES 00:04:03.150 Checking for size of "void *" : 8 00:04:03.150 Checking for size of "void *" : 8 (cached) 00:04:03.150 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:03.151 Library m found: YES 00:04:03.151 Library numa found: YES 00:04:03.151 Has header "numaif.h" : YES 00:04:03.151 Library fdt found: NO 00:04:03.151 Library execinfo found: NO 00:04:03.151 Has header "execinfo.h" : YES 00:04:03.151 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:03.151 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:03.151 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:03.151 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:03.151 Run-time dependency openssl found: YES 3.1.1 00:04:03.151 Run-time dependency libpcap found: YES 1.10.4 00:04:03.151 Has header "pcap.h" with dependency libpcap: YES 00:04:03.151 Compiler for C supports arguments -Wcast-qual: YES 00:04:03.151 Compiler for C supports arguments -Wdeprecated: YES 00:04:03.151 Compiler for C supports arguments -Wformat: YES 00:04:03.151 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:03.151 Compiler for C supports arguments -Wformat-security: NO 00:04:03.151 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:03.151 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:03.151 Compiler for C supports arguments -Wnested-externs: YES 00:04:03.151 Compiler for C supports arguments -Wold-style-definition: YES 00:04:03.151 Compiler for C supports arguments -Wpointer-arith: YES 00:04:03.151 Compiler for C supports arguments -Wsign-compare: YES 00:04:03.151 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:03.151 Compiler for C supports arguments -Wundef: YES 00:04:03.151 Compiler for C supports arguments -Wwrite-strings: YES 00:04:03.151 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:03.151 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:04:03.151 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:03.151 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:03.151 Program objdump found: YES (/usr/bin/objdump) 00:04:03.151 Compiler for C supports arguments -mavx512f: YES 00:04:03.151 Checking if "AVX512 checking" compiles: YES 00:04:03.151 Fetching value of define "__SSE4_2__" : 1 00:04:03.151 Fetching value of define "__AES__" : 1 00:04:03.151 Fetching value of define "__AVX__" : 1 00:04:03.151 Fetching value of define "__AVX2__" : (undefined) 00:04:03.151 Fetching value of define "__AVX512BW__" : (undefined) 00:04:03.151 Fetching value of define "__AVX512CD__" : (undefined) 00:04:03.151 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:03.151 Fetching value of define "__AVX512F__" : (undefined) 00:04:03.151 Fetching value of define "__AVX512VL__" : (undefined) 00:04:03.151 Fetching value of define "__PCLMUL__" : 1 00:04:03.151 Fetching value of define "__RDRND__" : 1 00:04:03.151 Fetching value of define "__RDSEED__" : (undefined) 00:04:03.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:03.151 Fetching value of define "__znver1__" : (undefined) 00:04:03.151 Fetching value of define "__znver2__" : (undefined) 00:04:03.151 Fetching value of define "__znver3__" : (undefined) 00:04:03.151 Fetching value of define "__znver4__" : (undefined) 00:04:03.151 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:03.151 Message: lib/log: Defining dependency "log" 00:04:03.151 Message: lib/kvargs: Defining dependency "kvargs" 00:04:03.151 Message: lib/telemetry: Defining dependency "telemetry" 00:04:03.151 Checking for function "getentropy" : NO 00:04:03.151 Message: lib/eal: Defining dependency "eal" 00:04:03.151 Message: lib/ring: Defining dependency "ring" 00:04:03.151 Message: lib/rcu: Defining dependency "rcu" 00:04:03.151 Message: lib/mempool: Defining dependency "mempool" 00:04:03.151 Message: lib/mbuf: Defining dependency "mbuf" 00:04:03.151 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:03.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:03.151 Compiler for C supports arguments -mpclmul: YES 00:04:03.151 Compiler for C supports arguments -maes: YES 00:04:03.151 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:03.151 Compiler for C supports arguments -mavx512bw: YES 00:04:03.151 Compiler for C supports arguments -mavx512dq: YES 00:04:03.151 Compiler for C supports arguments -mavx512vl: YES 00:04:03.151 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:03.151 Compiler for C supports arguments -mavx2: YES 00:04:03.151 Compiler for C supports arguments -mavx: YES 00:04:03.151 Message: lib/net: Defining dependency "net" 00:04:03.151 Message: lib/meter: Defining dependency "meter" 00:04:03.151 Message: lib/ethdev: Defining dependency "ethdev" 00:04:03.151 Message: lib/pci: Defining dependency "pci" 00:04:03.151 Message: lib/cmdline: Defining dependency "cmdline" 00:04:03.151 Message: lib/hash: Defining dependency "hash" 00:04:03.151 Message: lib/timer: Defining dependency "timer" 00:04:03.151 Message: lib/compressdev: Defining dependency "compressdev" 00:04:03.151 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:03.151 Message: lib/dmadev: Defining dependency "dmadev" 00:04:03.151 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:03.151 Message: lib/power: Defining dependency "power" 00:04:03.151 Message: lib/reorder: Defining dependency 
"reorder" 00:04:03.151 Message: lib/security: Defining dependency "security" 00:04:03.151 Has header "linux/userfaultfd.h" : YES 00:04:03.151 Has header "linux/vduse.h" : YES 00:04:03.151 Message: lib/vhost: Defining dependency "vhost" 00:04:03.151 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:03.151 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:03.151 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:03.151 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:03.151 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:03.151 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:03.151 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:03.151 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:03.151 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:03.151 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:03.151 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:03.151 Configuring doxy-api-html.conf using configuration 00:04:03.151 Configuring doxy-api-man.conf using configuration 00:04:03.151 Program mandb found: YES (/usr/bin/mandb) 00:04:03.151 Program sphinx-build found: NO 00:04:03.151 Configuring rte_build_config.h using configuration 00:04:03.151 Message: 00:04:03.151 ================= 00:04:03.151 Applications Enabled 00:04:03.151 ================= 00:04:03.151 00:04:03.151 apps: 00:04:03.151 00:04:03.151 00:04:03.151 Message: 00:04:03.151 ================= 00:04:03.151 Libraries Enabled 00:04:03.151 ================= 00:04:03.151 00:04:03.151 libs: 00:04:03.151 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:03.151 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:03.151 cryptodev, dmadev, power, reorder, security, vhost, 00:04:03.151 00:04:03.151 Message: 00:04:03.151 =============== 00:04:03.151 Drivers Enabled 00:04:03.151 =============== 00:04:03.151 00:04:03.151 common: 00:04:03.151 00:04:03.151 bus: 00:04:03.151 pci, vdev, 00:04:03.151 mempool: 00:04:03.151 ring, 00:04:03.151 dma: 00:04:03.151 00:04:03.151 net: 00:04:03.151 00:04:03.151 crypto: 00:04:03.151 00:04:03.151 compress: 00:04:03.151 00:04:03.151 vdpa: 00:04:03.151 00:04:03.151 00:04:03.151 Message: 00:04:03.151 ================= 00:04:03.151 Content Skipped 00:04:03.151 ================= 00:04:03.151 00:04:03.151 apps: 00:04:03.151 dumpcap: explicitly disabled via build config 00:04:03.151 graph: explicitly disabled via build config 00:04:03.151 pdump: explicitly disabled via build config 00:04:03.151 proc-info: explicitly disabled via build config 00:04:03.151 test-acl: explicitly disabled via build config 00:04:03.151 test-bbdev: explicitly disabled via build config 00:04:03.151 test-cmdline: explicitly disabled via build config 00:04:03.151 test-compress-perf: explicitly disabled via build config 00:04:03.151 test-crypto-perf: explicitly disabled via build config 00:04:03.151 test-dma-perf: explicitly disabled via build config 00:04:03.151 test-eventdev: explicitly disabled via build config 00:04:03.151 test-fib: explicitly disabled via build config 00:04:03.151 test-flow-perf: explicitly disabled via build config 00:04:03.151 test-gpudev: explicitly disabled via build config 00:04:03.151 test-mldev: explicitly disabled via build config 00:04:03.151 test-pipeline: explicitly disabled via build config 00:04:03.151 test-pmd: explicitly 
disabled via build config 00:04:03.151 test-regex: explicitly disabled via build config 00:04:03.152 test-sad: explicitly disabled via build config 00:04:03.152 test-security-perf: explicitly disabled via build config 00:04:03.152 00:04:03.152 libs: 00:04:03.152 argparse: explicitly disabled via build config 00:04:03.152 metrics: explicitly disabled via build config 00:04:03.152 acl: explicitly disabled via build config 00:04:03.152 bbdev: explicitly disabled via build config 00:04:03.152 bitratestats: explicitly disabled via build config 00:04:03.152 bpf: explicitly disabled via build config 00:04:03.152 cfgfile: explicitly disabled via build config 00:04:03.152 distributor: explicitly disabled via build config 00:04:03.152 efd: explicitly disabled via build config 00:04:03.152 eventdev: explicitly disabled via build config 00:04:03.152 dispatcher: explicitly disabled via build config 00:04:03.152 gpudev: explicitly disabled via build config 00:04:03.152 gro: explicitly disabled via build config 00:04:03.152 gso: explicitly disabled via build config 00:04:03.152 ip_frag: explicitly disabled via build config 00:04:03.152 jobstats: explicitly disabled via build config 00:04:03.152 latencystats: explicitly disabled via build config 00:04:03.152 lpm: explicitly disabled via build config 00:04:03.152 member: explicitly disabled via build config 00:04:03.152 pcapng: explicitly disabled via build config 00:04:03.152 rawdev: explicitly disabled via build config 00:04:03.152 regexdev: explicitly disabled via build config 00:04:03.152 mldev: explicitly disabled via build config 00:04:03.152 rib: explicitly disabled via build config 00:04:03.152 sched: explicitly disabled via build config 00:04:03.152 stack: explicitly disabled via build config 00:04:03.152 ipsec: explicitly disabled via build config 00:04:03.152 pdcp: explicitly disabled via build config 00:04:03.152 fib: explicitly disabled via build config 00:04:03.152 port: explicitly disabled via build config 00:04:03.152 pdump: explicitly disabled via build config 00:04:03.152 table: explicitly disabled via build config 00:04:03.152 pipeline: explicitly disabled via build config 00:04:03.152 graph: explicitly disabled via build config 00:04:03.152 node: explicitly disabled via build config 00:04:03.152 00:04:03.152 drivers: 00:04:03.152 common/cpt: not in enabled drivers build config 00:04:03.152 common/dpaax: not in enabled drivers build config 00:04:03.152 common/iavf: not in enabled drivers build config 00:04:03.152 common/idpf: not in enabled drivers build config 00:04:03.152 common/ionic: not in enabled drivers build config 00:04:03.152 common/mvep: not in enabled drivers build config 00:04:03.152 common/octeontx: not in enabled drivers build config 00:04:03.152 bus/auxiliary: not in enabled drivers build config 00:04:03.152 bus/cdx: not in enabled drivers build config 00:04:03.152 bus/dpaa: not in enabled drivers build config 00:04:03.152 bus/fslmc: not in enabled drivers build config 00:04:03.152 bus/ifpga: not in enabled drivers build config 00:04:03.152 bus/platform: not in enabled drivers build config 00:04:03.152 bus/uacce: not in enabled drivers build config 00:04:03.152 bus/vmbus: not in enabled drivers build config 00:04:03.152 common/cnxk: not in enabled drivers build config 00:04:03.152 common/mlx5: not in enabled drivers build config 00:04:03.152 common/nfp: not in enabled drivers build config 00:04:03.152 common/nitrox: not in enabled drivers build config 00:04:03.152 common/qat: not in enabled drivers build config 
00:04:03.152 common/sfc_efx: not in enabled drivers build config 00:04:03.152 mempool/bucket: not in enabled drivers build config 00:04:03.152 mempool/cnxk: not in enabled drivers build config 00:04:03.152 mempool/dpaa: not in enabled drivers build config 00:04:03.152 mempool/dpaa2: not in enabled drivers build config 00:04:03.152 mempool/octeontx: not in enabled drivers build config 00:04:03.152 mempool/stack: not in enabled drivers build config 00:04:03.152 dma/cnxk: not in enabled drivers build config 00:04:03.152 dma/dpaa: not in enabled drivers build config 00:04:03.152 dma/dpaa2: not in enabled drivers build config 00:04:03.152 dma/hisilicon: not in enabled drivers build config 00:04:03.152 dma/idxd: not in enabled drivers build config 00:04:03.152 dma/ioat: not in enabled drivers build config 00:04:03.152 dma/skeleton: not in enabled drivers build config 00:04:03.152 net/af_packet: not in enabled drivers build config 00:04:03.152 net/af_xdp: not in enabled drivers build config 00:04:03.152 net/ark: not in enabled drivers build config 00:04:03.152 net/atlantic: not in enabled drivers build config 00:04:03.152 net/avp: not in enabled drivers build config 00:04:03.152 net/axgbe: not in enabled drivers build config 00:04:03.152 net/bnx2x: not in enabled drivers build config 00:04:03.152 net/bnxt: not in enabled drivers build config 00:04:03.152 net/bonding: not in enabled drivers build config 00:04:03.152 net/cnxk: not in enabled drivers build config 00:04:03.152 net/cpfl: not in enabled drivers build config 00:04:03.152 net/cxgbe: not in enabled drivers build config 00:04:03.152 net/dpaa: not in enabled drivers build config 00:04:03.152 net/dpaa2: not in enabled drivers build config 00:04:03.152 net/e1000: not in enabled drivers build config 00:04:03.152 net/ena: not in enabled drivers build config 00:04:03.152 net/enetc: not in enabled drivers build config 00:04:03.152 net/enetfec: not in enabled drivers build config 00:04:03.152 net/enic: not in enabled drivers build config 00:04:03.152 net/failsafe: not in enabled drivers build config 00:04:03.152 net/fm10k: not in enabled drivers build config 00:04:03.152 net/gve: not in enabled drivers build config 00:04:03.152 net/hinic: not in enabled drivers build config 00:04:03.152 net/hns3: not in enabled drivers build config 00:04:03.152 net/i40e: not in enabled drivers build config 00:04:03.152 net/iavf: not in enabled drivers build config 00:04:03.152 net/ice: not in enabled drivers build config 00:04:03.152 net/idpf: not in enabled drivers build config 00:04:03.152 net/igc: not in enabled drivers build config 00:04:03.152 net/ionic: not in enabled drivers build config 00:04:03.152 net/ipn3ke: not in enabled drivers build config 00:04:03.152 net/ixgbe: not in enabled drivers build config 00:04:03.152 net/mana: not in enabled drivers build config 00:04:03.152 net/memif: not in enabled drivers build config 00:04:03.152 net/mlx4: not in enabled drivers build config 00:04:03.152 net/mlx5: not in enabled drivers build config 00:04:03.152 net/mvneta: not in enabled drivers build config 00:04:03.152 net/mvpp2: not in enabled drivers build config 00:04:03.152 net/netvsc: not in enabled drivers build config 00:04:03.152 net/nfb: not in enabled drivers build config 00:04:03.152 net/nfp: not in enabled drivers build config 00:04:03.152 net/ngbe: not in enabled drivers build config 00:04:03.152 net/null: not in enabled drivers build config 00:04:03.152 net/octeontx: not in enabled drivers build config 00:04:03.152 net/octeon_ep: not in enabled 
drivers build config 00:04:03.152 net/pcap: not in enabled drivers build config 00:04:03.152 net/pfe: not in enabled drivers build config 00:04:03.152 net/qede: not in enabled drivers build config 00:04:03.152 net/ring: not in enabled drivers build config 00:04:03.152 net/sfc: not in enabled drivers build config 00:04:03.152 net/softnic: not in enabled drivers build config 00:04:03.152 net/tap: not in enabled drivers build config 00:04:03.152 net/thunderx: not in enabled drivers build config 00:04:03.152 net/txgbe: not in enabled drivers build config 00:04:03.152 net/vdev_netvsc: not in enabled drivers build config 00:04:03.152 net/vhost: not in enabled drivers build config 00:04:03.152 net/virtio: not in enabled drivers build config 00:04:03.152 net/vmxnet3: not in enabled drivers build config 00:04:03.152 raw/*: missing internal dependency, "rawdev" 00:04:03.152 crypto/armv8: not in enabled drivers build config 00:04:03.152 crypto/bcmfs: not in enabled drivers build config 00:04:03.152 crypto/caam_jr: not in enabled drivers build config 00:04:03.152 crypto/ccp: not in enabled drivers build config 00:04:03.152 crypto/cnxk: not in enabled drivers build config 00:04:03.152 crypto/dpaa_sec: not in enabled drivers build config 00:04:03.152 crypto/dpaa2_sec: not in enabled drivers build config 00:04:03.152 crypto/ipsec_mb: not in enabled drivers build config 00:04:03.152 crypto/mlx5: not in enabled drivers build config 00:04:03.152 crypto/mvsam: not in enabled drivers build config 00:04:03.152 crypto/nitrox: not in enabled drivers build config 00:04:03.152 crypto/null: not in enabled drivers build config 00:04:03.152 crypto/octeontx: not in enabled drivers build config 00:04:03.152 crypto/openssl: not in enabled drivers build config 00:04:03.152 crypto/scheduler: not in enabled drivers build config 00:04:03.152 crypto/uadk: not in enabled drivers build config 00:04:03.152 crypto/virtio: not in enabled drivers build config 00:04:03.152 compress/isal: not in enabled drivers build config 00:04:03.152 compress/mlx5: not in enabled drivers build config 00:04:03.152 compress/nitrox: not in enabled drivers build config 00:04:03.152 compress/octeontx: not in enabled drivers build config 00:04:03.152 compress/zlib: not in enabled drivers build config 00:04:03.152 regex/*: missing internal dependency, "regexdev" 00:04:03.152 ml/*: missing internal dependency, "mldev" 00:04:03.152 vdpa/ifc: not in enabled drivers build config 00:04:03.152 vdpa/mlx5: not in enabled drivers build config 00:04:03.152 vdpa/nfp: not in enabled drivers build config 00:04:03.152 vdpa/sfc: not in enabled drivers build config 00:04:03.153 event/*: missing internal dependency, "eventdev" 00:04:03.153 baseband/*: missing internal dependency, "bbdev" 00:04:03.153 gpu/*: missing internal dependency, "gpudev" 00:04:03.153 00:04:03.153 00:04:03.411 Build targets in project: 85 00:04:03.411 00:04:03.411 DPDK 24.03.0 00:04:03.411 00:04:03.411 User defined options 00:04:03.411 buildtype : debug 00:04:03.411 default_library : shared 00:04:03.411 libdir : lib 00:04:03.411 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:03.411 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:03.411 c_link_args : 00:04:03.411 cpu_instruction_set: native 00:04:03.411 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:04:03.411 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:04:03.411 enable_docs : false 00:04:03.411 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:03.411 enable_kmods : false 00:04:03.411 max_lcores : 128 00:04:03.411 tests : false 00:04:03.411 00:04:03.411 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:04.353 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:04.353 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:04.353 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:04.353 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:04.353 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:04.353 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:04.353 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:04.353 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:04.353 [8/268] Linking static target lib/librte_kvargs.a 00:04:04.353 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:04.353 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:04.353 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:04.353 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:04.353 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:04.353 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:04.353 [15/268] Linking static target lib/librte_log.a 00:04:04.615 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:05.186 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.186 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:05.186 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:05.186 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:05.186 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:05.186 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:05.186 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:05.186 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:05.186 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:05.186 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:05.186 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:05.186 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:05.186 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:05.186 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 
00:04:05.186 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:05.186 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:05.186 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:05.186 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:05.186 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:05.186 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:05.186 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:05.186 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:05.186 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:05.448 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:05.448 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:05.448 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:05.448 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:05.448 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:05.448 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:05.448 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:05.448 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:05.448 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:05.448 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:05.448 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:05.448 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:05.448 [52/268] Linking static target lib/librte_telemetry.a 00:04:05.448 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:05.448 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:05.448 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:05.448 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:05.448 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:05.448 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:05.448 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:05.448 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:05.448 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:05.448 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:05.448 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:05.709 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:05.709 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:05.709 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.709 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:05.709 [68/268] Linking static target lib/librte_pci.a 00:04:05.709 [69/268] Linking target lib/librte_log.so.24.1 00:04:05.971 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:05.971 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:05.971 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:05.971 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:06.230 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:06.230 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:06.230 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:06.230 [77/268] Linking target lib/librte_kvargs.so.24.1 00:04:06.230 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:06.230 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:06.230 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:06.230 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:06.230 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:06.230 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:06.230 [84/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.230 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:06.230 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:06.230 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:06.230 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:06.230 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:06.230 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:06.230 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:06.230 [92/268] Linking static target lib/librte_ring.a 00:04:06.230 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:06.230 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:06.230 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:06.492 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:06.492 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:06.492 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:06.492 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:06.492 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:06.492 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:06.492 [102/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:06.492 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:06.492 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.492 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:06.492 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:06.492 [107/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:06.492 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:06.492 [109/268] Linking target lib/librte_telemetry.so.24.1 00:04:06.492 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:06.492 [111/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:06.492 [112/268] Linking static target lib/librte_eal.a 00:04:06.492 [113/268] Linking static target lib/librte_rcu.a 00:04:06.492 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:06.492 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:06.492 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:06.492 [117/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:06.492 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:06.492 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:06.492 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:06.753 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:06.753 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:06.753 [123/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:06.753 [124/268] Linking static target lib/librte_meter.a 00:04:06.753 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:06.753 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:06.753 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:06.753 [128/268] Linking static target lib/librte_mempool.a 00:04:06.753 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:06.753 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:06.753 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:06.753 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:07.014 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:07.014 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:07.014 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:07.014 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:07.014 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.014 [138/268] Linking static target lib/librte_net.a 00:04:07.014 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:07.014 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:07.014 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:07.273 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:07.273 [143/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.273 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.273 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:07.273 [146/268] Linking static target lib/librte_cmdline.a 00:04:07.273 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:07.273 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:07.273 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:07.273 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:07.273 [151/268] Linking static target lib/librte_timer.a 00:04:07.273 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:07.273 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:07.531 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:07.531 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:07.531 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.531 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:07.531 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:07.531 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:07.531 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:07.531 [161/268] Linking static target lib/librte_dmadev.a 00:04:07.531 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:07.531 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:07.789 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:07.790 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:07.790 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:07.790 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:07.790 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:07.790 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:07.790 [170/268] Linking static target lib/librte_power.a 00:04:07.790 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:07.790 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:07.790 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.790 [174/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.790 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:07.790 [176/268] Linking static target lib/librte_hash.a 00:04:07.790 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:07.790 [178/268] Linking static target lib/librte_compressdev.a 00:04:07.790 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:08.051 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:08.051 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:08.051 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:08.051 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:08.051 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:08.051 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:08.051 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:08.051 [187/268] Linking static target lib/librte_reorder.a 00:04:08.051 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:08.051 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.051 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:08.051 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:08.051 
[192/268] Linking static target lib/librte_mbuf.a 00:04:08.051 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:08.051 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:08.310 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:08.310 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.310 [197/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:08.310 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:08.310 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:08.310 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:08.310 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:08.310 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:08.310 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:08.310 [204/268] Linking static target lib/librte_security.a 00:04:08.310 [205/268] Linking static target drivers/librte_bus_vdev.a 00:04:08.310 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.310 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.310 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.568 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:08.568 [210/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.568 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:08.568 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:08.568 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:08.568 [214/268] Linking static target drivers/librte_bus_pci.a 00:04:08.568 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:08.568 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:08.568 [217/268] Linking static target drivers/librte_mempool_ring.a 00:04:08.568 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.568 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.568 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:08.568 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:08.568 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.826 [223/268] Linking static target lib/librte_ethdev.a 00:04:08.826 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:08.826 [225/268] Linking static target lib/librte_cryptodev.a 00:04:09.083 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.016 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.416 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:13.314 [229/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.314 [230/268] Linking target lib/librte_eal.so.24.1 00:04:13.573 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:13.573 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.573 [233/268] Linking target lib/librte_meter.so.24.1 00:04:13.573 [234/268] Linking target lib/librte_pci.so.24.1 00:04:13.573 [235/268] Linking target lib/librte_timer.so.24.1 00:04:13.573 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:13.573 [237/268] Linking target lib/librte_ring.so.24.1 00:04:13.573 [238/268] Linking target lib/librte_dmadev.so.24.1 00:04:13.573 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:13.573 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:13.573 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:13.573 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:13.573 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:13.830 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:13.830 [245/268] Linking target lib/librte_rcu.so.24.1 00:04:13.830 [246/268] Linking target lib/librte_mempool.so.24.1 00:04:13.830 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:13.830 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:13.830 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:13.830 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:14.088 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:14.088 [252/268] Linking target lib/librte_net.so.24.1 00:04:14.088 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:14.088 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:14.088 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:14.088 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:14.088 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:14.366 [258/268] Linking target lib/librte_security.so.24.1 00:04:14.366 [259/268] Linking target lib/librte_hash.so.24.1 00:04:14.366 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:14.366 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:14.366 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:14.366 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:14.624 [264/268] Linking target lib/librte_power.so.24.1 00:04:17.907 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:18.164 [266/268] Linking static target lib/librte_vhost.a 00:04:19.535 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.535 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:19.535 INFO: autodetecting backend as ninja 00:04:19.535 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:05:06.258 CC lib/ut_mock/mock.o 00:05:06.258 CC lib/ut/ut.o 00:05:06.258 CC lib/log/log.o 00:05:06.258 CC lib/log/log_flags.o 00:05:06.258 CC 
lib/log/log_deprecated.o 00:05:06.258 LIB libspdk_ut_mock.a 00:05:06.258 LIB libspdk_log.a 00:05:06.258 LIB libspdk_ut.a 00:05:06.258 SO libspdk_ut_mock.so.6.0 00:05:06.258 SO libspdk_log.so.7.0 00:05:06.258 SO libspdk_ut.so.2.0 00:05:06.258 SYMLINK libspdk_ut_mock.so 00:05:06.258 SYMLINK libspdk_ut.so 00:05:06.258 SYMLINK libspdk_log.so 00:05:06.258 CC lib/util/base64.o 00:05:06.258 CC lib/util/bit_array.o 00:05:06.258 CC lib/util/cpuset.o 00:05:06.258 CC lib/util/crc32.o 00:05:06.258 CC lib/util/crc16.o 00:05:06.258 CC lib/util/crc32c.o 00:05:06.258 CXX lib/trace_parser/trace.o 00:05:06.258 CC lib/util/crc32_ieee.o 00:05:06.258 CC lib/util/crc64.o 00:05:06.258 CC lib/util/dif.o 00:05:06.258 CC lib/util/fd.o 00:05:06.258 CC lib/dma/dma.o 00:05:06.258 CC lib/util/fd_group.o 00:05:06.258 CC lib/util/file.o 00:05:06.258 CC lib/util/hexlify.o 00:05:06.258 CC lib/ioat/ioat.o 00:05:06.258 CC lib/util/iov.o 00:05:06.258 CC lib/util/math.o 00:05:06.258 CC lib/util/net.o 00:05:06.258 CC lib/util/pipe.o 00:05:06.258 CC lib/util/strerror_tls.o 00:05:06.258 CC lib/util/string.o 00:05:06.258 CC lib/util/uuid.o 00:05:06.258 CC lib/util/xor.o 00:05:06.258 CC lib/util/zipf.o 00:05:06.258 CC lib/util/md5.o 00:05:06.258 CC lib/vfio_user/host/vfio_user_pci.o 00:05:06.258 CC lib/vfio_user/host/vfio_user.o 00:05:06.258 LIB libspdk_ioat.a 00:05:06.258 LIB libspdk_dma.a 00:05:06.258 SO libspdk_ioat.so.7.0 00:05:06.258 SO libspdk_dma.so.5.0 00:05:06.258 SYMLINK libspdk_dma.so 00:05:06.258 SYMLINK libspdk_ioat.so 00:05:06.258 LIB libspdk_vfio_user.a 00:05:06.258 SO libspdk_vfio_user.so.5.0 00:05:06.258 SYMLINK libspdk_vfio_user.so 00:05:06.258 LIB libspdk_util.a 00:05:06.258 SO libspdk_util.so.10.0 00:05:06.258 SYMLINK libspdk_util.so 00:05:06.258 CC lib/env_dpdk/env.o 00:05:06.258 CC lib/env_dpdk/memory.o 00:05:06.258 CC lib/json/json_parse.o 00:05:06.258 CC lib/env_dpdk/pci.o 00:05:06.258 CC lib/json/json_util.o 00:05:06.258 CC lib/json/json_write.o 00:05:06.258 CC lib/env_dpdk/init.o 00:05:06.258 CC lib/vmd/vmd.o 00:05:06.258 CC lib/conf/conf.o 00:05:06.258 CC lib/env_dpdk/threads.o 00:05:06.258 CC lib/vmd/led.o 00:05:06.258 CC lib/env_dpdk/pci_ioat.o 00:05:06.258 CC lib/env_dpdk/pci_virtio.o 00:05:06.258 CC lib/env_dpdk/pci_vmd.o 00:05:06.258 CC lib/env_dpdk/pci_idxd.o 00:05:06.258 CC lib/env_dpdk/pci_event.o 00:05:06.258 CC lib/idxd/idxd.o 00:05:06.258 CC lib/env_dpdk/sigbus_handler.o 00:05:06.258 CC lib/idxd/idxd_user.o 00:05:06.258 CC lib/env_dpdk/pci_dpdk.o 00:05:06.258 CC lib/idxd/idxd_kernel.o 00:05:06.258 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:06.258 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:06.258 CC lib/rdma_utils/rdma_utils.o 00:05:06.258 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:06.258 CC lib/rdma_provider/common.o 00:05:06.258 LIB libspdk_trace_parser.a 00:05:06.258 SO libspdk_trace_parser.so.6.0 00:05:06.258 LIB libspdk_conf.a 00:05:06.258 SO libspdk_conf.so.6.0 00:05:06.258 SYMLINK libspdk_trace_parser.so 00:05:06.258 LIB libspdk_rdma_provider.a 00:05:06.258 LIB libspdk_json.a 00:05:06.258 LIB libspdk_rdma_utils.a 00:05:06.258 SYMLINK libspdk_conf.so 00:05:06.258 SO libspdk_rdma_provider.so.6.0 00:05:06.258 SO libspdk_rdma_utils.so.1.0 00:05:06.258 SO libspdk_json.so.6.0 00:05:06.258 SYMLINK libspdk_rdma_provider.so 00:05:06.258 SYMLINK libspdk_rdma_utils.so 00:05:06.258 SYMLINK libspdk_json.so 00:05:06.519 CC lib/jsonrpc/jsonrpc_server.o 00:05:06.519 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:06.519 CC lib/jsonrpc/jsonrpc_client.o 00:05:06.519 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:06.519 
LIB libspdk_vmd.a 00:05:06.519 LIB libspdk_idxd.a 00:05:06.519 SO libspdk_vmd.so.6.0 00:05:06.777 SO libspdk_idxd.so.12.1 00:05:06.777 SYMLINK libspdk_vmd.so 00:05:06.777 SYMLINK libspdk_idxd.so 00:05:06.777 LIB libspdk_jsonrpc.a 00:05:06.777 SO libspdk_jsonrpc.so.6.0 00:05:07.036 SYMLINK libspdk_jsonrpc.so 00:05:07.036 CC lib/rpc/rpc.o 00:05:07.604 LIB libspdk_rpc.a 00:05:07.604 SO libspdk_rpc.so.6.0 00:05:07.604 SYMLINK libspdk_rpc.so 00:05:07.604 CC lib/keyring/keyring.o 00:05:07.604 CC lib/keyring/keyring_rpc.o 00:05:07.604 CC lib/trace/trace.o 00:05:07.604 CC lib/trace/trace_flags.o 00:05:07.604 CC lib/trace/trace_rpc.o 00:05:07.604 CC lib/notify/notify.o 00:05:07.604 CC lib/notify/notify_rpc.o 00:05:07.863 LIB libspdk_notify.a 00:05:07.863 SO libspdk_notify.so.6.0 00:05:07.863 SYMLINK libspdk_notify.so 00:05:08.122 LIB libspdk_keyring.a 00:05:08.122 LIB libspdk_trace.a 00:05:08.122 SO libspdk_keyring.so.2.0 00:05:08.122 SO libspdk_trace.so.11.0 00:05:08.122 SYMLINK libspdk_keyring.so 00:05:08.122 SYMLINK libspdk_trace.so 00:05:08.381 LIB libspdk_env_dpdk.a 00:05:08.381 CC lib/sock/sock.o 00:05:08.381 CC lib/sock/sock_rpc.o 00:05:08.381 SO libspdk_env_dpdk.so.15.0 00:05:08.381 CC lib/thread/thread.o 00:05:08.381 CC lib/thread/iobuf.o 00:05:08.639 SYMLINK libspdk_env_dpdk.so 00:05:09.207 LIB libspdk_sock.a 00:05:09.207 SO libspdk_sock.so.10.0 00:05:09.207 SYMLINK libspdk_sock.so 00:05:09.465 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:09.465 CC lib/nvme/nvme_ctrlr.o 00:05:09.466 CC lib/nvme/nvme_fabric.o 00:05:09.466 CC lib/nvme/nvme_ns_cmd.o 00:05:09.466 CC lib/nvme/nvme_ns.o 00:05:09.466 CC lib/nvme/nvme_pcie_common.o 00:05:09.466 CC lib/nvme/nvme_pcie.o 00:05:09.466 CC lib/nvme/nvme_qpair.o 00:05:09.466 CC lib/nvme/nvme.o 00:05:09.466 CC lib/nvme/nvme_quirks.o 00:05:09.466 CC lib/nvme/nvme_transport.o 00:05:09.466 CC lib/nvme/nvme_discovery.o 00:05:09.466 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:09.466 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:09.466 CC lib/nvme/nvme_tcp.o 00:05:09.466 CC lib/nvme/nvme_opal.o 00:05:09.466 CC lib/nvme/nvme_io_msg.o 00:05:09.466 CC lib/nvme/nvme_poll_group.o 00:05:09.466 CC lib/nvme/nvme_zns.o 00:05:09.466 CC lib/nvme/nvme_stubs.o 00:05:09.466 CC lib/nvme/nvme_auth.o 00:05:09.466 CC lib/nvme/nvme_cuse.o 00:05:09.466 CC lib/nvme/nvme_rdma.o 00:05:09.466 CC lib/nvme/nvme_vfio_user.o 00:05:10.402 LIB libspdk_thread.a 00:05:10.402 SO libspdk_thread.so.10.2 00:05:10.402 SYMLINK libspdk_thread.so 00:05:10.402 CC lib/init/json_config.o 00:05:10.402 CC lib/fsdev/fsdev.o 00:05:10.402 CC lib/fsdev/fsdev_io.o 00:05:10.402 CC lib/init/subsystem.o 00:05:10.402 CC lib/fsdev/fsdev_rpc.o 00:05:10.402 CC lib/init/subsystem_rpc.o 00:05:10.402 CC lib/init/rpc.o 00:05:10.402 CC lib/accel/accel.o 00:05:10.402 CC lib/virtio/virtio.o 00:05:10.402 CC lib/blob/request.o 00:05:10.402 CC lib/blob/blobstore.o 00:05:10.402 CC lib/vfu_tgt/tgt_endpoint.o 00:05:10.402 CC lib/vfu_tgt/tgt_rpc.o 00:05:10.402 CC lib/accel/accel_rpc.o 00:05:10.402 CC lib/accel/accel_sw.o 00:05:10.402 CC lib/virtio/virtio_vhost_user.o 00:05:10.402 CC lib/blob/zeroes.o 00:05:10.402 CC lib/blob/blob_bs_dev.o 00:05:10.402 CC lib/virtio/virtio_vfio_user.o 00:05:10.402 CC lib/virtio/virtio_pci.o 00:05:10.969 LIB libspdk_init.a 00:05:10.969 SO libspdk_init.so.6.0 00:05:10.969 LIB libspdk_virtio.a 00:05:10.969 SO libspdk_virtio.so.7.0 00:05:10.969 SYMLINK libspdk_init.so 00:05:10.969 LIB libspdk_vfu_tgt.a 00:05:10.969 SYMLINK libspdk_virtio.so 00:05:10.969 SO libspdk_vfu_tgt.so.3.0 00:05:10.969 SYMLINK libspdk_vfu_tgt.so 
00:05:11.253 CC lib/event/app.o 00:05:11.253 CC lib/event/reactor.o 00:05:11.253 CC lib/event/log_rpc.o 00:05:11.253 CC lib/event/app_rpc.o 00:05:11.253 CC lib/event/scheduler_static.o 00:05:11.253 LIB libspdk_fsdev.a 00:05:11.253 SO libspdk_fsdev.so.1.0 00:05:11.253 SYMLINK libspdk_fsdev.so 00:05:11.511 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:11.511 LIB libspdk_event.a 00:05:11.511 SO libspdk_event.so.15.0 00:05:11.769 SYMLINK libspdk_event.so 00:05:11.769 LIB libspdk_accel.a 00:05:11.769 SO libspdk_accel.so.16.0 00:05:11.769 SYMLINK libspdk_accel.so 00:05:12.026 CC lib/bdev/bdev.o 00:05:12.026 CC lib/bdev/bdev_rpc.o 00:05:12.026 CC lib/bdev/bdev_zone.o 00:05:12.026 CC lib/bdev/part.o 00:05:12.026 CC lib/bdev/scsi_nvme.o 00:05:12.026 LIB libspdk_fuse_dispatcher.a 00:05:12.026 SO libspdk_fuse_dispatcher.so.1.0 00:05:12.284 SYMLINK libspdk_fuse_dispatcher.so 00:05:12.284 LIB libspdk_nvme.a 00:05:12.284 SO libspdk_nvme.so.14.0 00:05:12.542 SYMLINK libspdk_nvme.so 00:05:13.916 LIB libspdk_blob.a 00:05:13.916 SO libspdk_blob.so.11.0 00:05:13.916 SYMLINK libspdk_blob.so 00:05:13.916 CC lib/lvol/lvol.o 00:05:13.916 CC lib/blobfs/blobfs.o 00:05:13.916 CC lib/blobfs/tree.o 00:05:15.290 LIB libspdk_blobfs.a 00:05:15.290 SO libspdk_blobfs.so.10.0 00:05:15.290 LIB libspdk_lvol.a 00:05:15.290 SO libspdk_lvol.so.10.0 00:05:15.290 SYMLINK libspdk_blobfs.so 00:05:15.290 SYMLINK libspdk_lvol.so 00:05:17.201 LIB libspdk_bdev.a 00:05:17.201 SO libspdk_bdev.so.17.0 00:05:17.201 SYMLINK libspdk_bdev.so 00:05:17.201 CC lib/nbd/nbd.o 00:05:17.201 CC lib/ftl/ftl_core.o 00:05:17.201 CC lib/ftl/ftl_init.o 00:05:17.201 CC lib/nbd/nbd_rpc.o 00:05:17.201 CC lib/ftl/ftl_layout.o 00:05:17.201 CC lib/nvmf/ctrlr.o 00:05:17.201 CC lib/nvmf/ctrlr_discovery.o 00:05:17.201 CC lib/ftl/ftl_debug.o 00:05:17.201 CC lib/nvmf/ctrlr_bdev.o 00:05:17.201 CC lib/ftl/ftl_io.o 00:05:17.201 CC lib/nvmf/subsystem.o 00:05:17.201 CC lib/ftl/ftl_sb.o 00:05:17.201 CC lib/nvmf/nvmf.o 00:05:17.201 CC lib/nvmf/nvmf_rpc.o 00:05:17.201 CC lib/ftl/ftl_l2p.o 00:05:17.201 CC lib/ftl/ftl_l2p_flat.o 00:05:17.201 CC lib/nvmf/transport.o 00:05:17.201 CC lib/ftl/ftl_nv_cache.o 00:05:17.201 CC lib/nvmf/tcp.o 00:05:17.201 CC lib/ftl/ftl_band.o 00:05:17.201 CC lib/nvmf/stubs.o 00:05:17.201 CC lib/ftl/ftl_band_ops.o 00:05:17.201 CC lib/ftl/ftl_writer.o 00:05:17.201 CC lib/nvmf/vfio_user.o 00:05:17.201 CC lib/nvmf/mdns_server.o 00:05:17.201 CC lib/nvmf/rdma.o 00:05:17.201 CC lib/ftl/ftl_rq.o 00:05:17.201 CC lib/scsi/dev.o 00:05:17.201 CC lib/ftl/ftl_reloc.o 00:05:17.201 CC lib/scsi/lun.o 00:05:17.201 CC lib/nvmf/auth.o 00:05:17.201 CC lib/scsi/port.o 00:05:17.201 CC lib/ftl/ftl_l2p_cache.o 00:05:17.201 CC lib/scsi/scsi.o 00:05:17.201 CC lib/ublk/ublk.o 00:05:17.201 CC lib/ftl/ftl_p2l.o 00:05:17.201 CC lib/scsi/scsi_bdev.o 00:05:17.201 CC lib/ublk/ublk_rpc.o 00:05:17.201 CC lib/ftl/ftl_p2l_log.o 00:05:17.201 CC lib/scsi/scsi_pr.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt.o 00:05:17.201 CC lib/scsi/scsi_rpc.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:17.201 CC lib/scsi/task.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:17.201 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:17.772 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:17.772 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:17.772 CC lib/ftl/utils/ftl_conf.o 00:05:17.772 CC lib/ftl/utils/ftl_md.o 00:05:17.772 CC lib/ftl/utils/ftl_mempool.o 00:05:17.772 CC lib/ftl/utils/ftl_bitmap.o 00:05:17.772 CC lib/ftl/utils/ftl_property.o 00:05:17.772 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:17.772 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:17.772 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:17.772 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:17.772 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:17.772 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:18.033 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:18.033 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:18.033 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:18.033 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:18.033 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:18.033 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:18.033 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:18.033 CC lib/ftl/base/ftl_base_dev.o 00:05:18.033 CC lib/ftl/base/ftl_base_bdev.o 00:05:18.033 CC lib/ftl/ftl_trace.o 00:05:18.033 LIB libspdk_nbd.a 00:05:18.292 SO libspdk_nbd.so.7.0 00:05:18.292 LIB libspdk_scsi.a 00:05:18.292 SYMLINK libspdk_nbd.so 00:05:18.292 SO libspdk_scsi.so.9.0 00:05:18.292 SYMLINK libspdk_scsi.so 00:05:18.551 LIB libspdk_ublk.a 00:05:18.551 SO libspdk_ublk.so.3.0 00:05:18.551 CC lib/iscsi/conn.o 00:05:18.551 CC lib/vhost/vhost.o 00:05:18.551 CC lib/vhost/vhost_rpc.o 00:05:18.551 CC lib/iscsi/init_grp.o 00:05:18.551 CC lib/vhost/vhost_scsi.o 00:05:18.551 CC lib/iscsi/iscsi.o 00:05:18.551 CC lib/vhost/vhost_blk.o 00:05:18.551 CC lib/iscsi/param.o 00:05:18.551 CC lib/vhost/rte_vhost_user.o 00:05:18.551 CC lib/iscsi/portal_grp.o 00:05:18.551 CC lib/iscsi/tgt_node.o 00:05:18.551 CC lib/iscsi/iscsi_subsystem.o 00:05:18.551 CC lib/iscsi/iscsi_rpc.o 00:05:18.551 CC lib/iscsi/task.o 00:05:18.551 SYMLINK libspdk_ublk.so 00:05:18.809 LIB libspdk_ftl.a 00:05:19.079 SO libspdk_ftl.so.9.0 00:05:19.337 SYMLINK libspdk_ftl.so 00:05:19.905 LIB libspdk_vhost.a 00:05:19.905 SO libspdk_vhost.so.8.0 00:05:19.905 LIB libspdk_nvmf.a 00:05:20.163 SO libspdk_nvmf.so.19.0 00:05:20.163 SYMLINK libspdk_vhost.so 00:05:20.163 LIB libspdk_iscsi.a 00:05:20.163 SO libspdk_iscsi.so.8.0 00:05:20.421 SYMLINK libspdk_iscsi.so 00:05:20.421 SYMLINK libspdk_nvmf.so 00:05:20.679 CC module/env_dpdk/env_dpdk_rpc.o 00:05:20.679 CC module/vfu_device/vfu_virtio.o 00:05:20.679 CC module/vfu_device/vfu_virtio_blk.o 00:05:20.679 CC module/vfu_device/vfu_virtio_scsi.o 00:05:20.679 CC module/vfu_device/vfu_virtio_rpc.o 00:05:20.679 CC module/vfu_device/vfu_virtio_fs.o 00:05:20.679 CC module/sock/posix/posix.o 00:05:20.679 CC module/accel/ioat/accel_ioat.o 00:05:20.679 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:20.679 CC module/accel/ioat/accel_ioat_rpc.o 00:05:20.679 CC module/keyring/file/keyring.o 00:05:20.679 CC module/keyring/file/keyring_rpc.o 00:05:20.679 CC module/scheduler/gscheduler/gscheduler.o 00:05:20.679 CC module/blob/bdev/blob_bdev.o 00:05:20.679 CC module/fsdev/aio/fsdev_aio.o 00:05:20.679 CC module/accel/error/accel_error.o 00:05:20.679 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:20.679 CC module/accel/iaa/accel_iaa.o 00:05:20.679 CC module/fsdev/aio/linux_aio_mgr.o 00:05:20.679 CC module/accel/iaa/accel_iaa_rpc.o 00:05:20.679 CC module/keyring/linux/keyring.o 00:05:20.679 CC module/accel/error/accel_error_rpc.o 00:05:20.679 CC module/keyring/linux/keyring_rpc.o 00:05:20.679 CC module/accel/dsa/accel_dsa_rpc.o 00:05:20.679 CC module/accel/dsa/accel_dsa.o 00:05:20.679 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:05:20.938 LIB libspdk_env_dpdk_rpc.a 00:05:20.938 SO libspdk_env_dpdk_rpc.so.6.0 00:05:20.938 LIB libspdk_keyring_file.a 00:05:20.938 SYMLINK libspdk_env_dpdk_rpc.so 00:05:20.938 LIB libspdk_scheduler_gscheduler.a 00:05:20.938 LIB libspdk_scheduler_dpdk_governor.a 00:05:20.938 SO libspdk_scheduler_gscheduler.so.4.0 00:05:20.938 SO libspdk_keyring_file.so.2.0 00:05:20.938 LIB libspdk_accel_error.a 00:05:20.938 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:20.938 LIB libspdk_accel_ioat.a 00:05:20.938 LIB libspdk_scheduler_dynamic.a 00:05:20.938 SO libspdk_accel_error.so.2.0 00:05:20.938 SO libspdk_accel_ioat.so.6.0 00:05:20.938 LIB libspdk_accel_iaa.a 00:05:20.938 SYMLINK libspdk_scheduler_gscheduler.so 00:05:20.938 SO libspdk_scheduler_dynamic.so.4.0 00:05:20.938 LIB libspdk_keyring_linux.a 00:05:20.938 SYMLINK libspdk_keyring_file.so 00:05:20.938 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:20.938 SO libspdk_accel_iaa.so.3.0 00:05:20.938 SO libspdk_keyring_linux.so.1.0 00:05:20.938 SYMLINK libspdk_accel_error.so 00:05:20.938 SYMLINK libspdk_accel_ioat.so 00:05:20.938 SYMLINK libspdk_scheduler_dynamic.so 00:05:21.197 SYMLINK libspdk_accel_iaa.so 00:05:21.197 LIB libspdk_accel_dsa.a 00:05:21.197 SYMLINK libspdk_keyring_linux.so 00:05:21.197 SO libspdk_accel_dsa.so.5.0 00:05:21.197 LIB libspdk_blob_bdev.a 00:05:21.197 SYMLINK libspdk_accel_dsa.so 00:05:21.197 SO libspdk_blob_bdev.so.11.0 00:05:21.197 SYMLINK libspdk_blob_bdev.so 00:05:21.459 LIB libspdk_vfu_device.a 00:05:21.459 SO libspdk_vfu_device.so.3.0 00:05:21.459 LIB libspdk_fsdev_aio.a 00:05:21.459 CC module/bdev/gpt/gpt.o 00:05:21.459 CC module/bdev/gpt/vbdev_gpt.o 00:05:21.459 CC module/bdev/error/vbdev_error.o 00:05:21.459 CC module/bdev/error/vbdev_error_rpc.o 00:05:21.459 CC module/bdev/passthru/vbdev_passthru.o 00:05:21.459 CC module/bdev/null/bdev_null.o 00:05:21.459 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:21.459 CC module/bdev/null/bdev_null_rpc.o 00:05:21.459 CC module/bdev/malloc/bdev_malloc.o 00:05:21.459 CC module/bdev/lvol/vbdev_lvol.o 00:05:21.459 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:21.459 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:21.459 CC module/blobfs/bdev/blobfs_bdev.o 00:05:21.459 CC module/bdev/delay/vbdev_delay.o 00:05:21.459 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:21.459 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:21.459 CC module/bdev/split/vbdev_split_rpc.o 00:05:21.459 CC module/bdev/split/vbdev_split.o 00:05:21.459 CC module/bdev/ftl/bdev_ftl.o 00:05:21.459 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:21.459 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:21.459 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:21.459 CC module/bdev/iscsi/bdev_iscsi.o 00:05:21.459 CC module/bdev/raid/bdev_raid.o 00:05:21.459 CC module/bdev/raid/bdev_raid_rpc.o 00:05:21.459 SO libspdk_fsdev_aio.so.1.0 00:05:21.459 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:21.459 CC module/bdev/aio/bdev_aio.o 00:05:21.459 CC module/bdev/raid/bdev_raid_sb.o 00:05:21.459 CC module/bdev/nvme/bdev_nvme.o 00:05:21.459 CC module/bdev/raid/raid0.o 00:05:21.459 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:21.459 CC module/bdev/aio/bdev_aio_rpc.o 00:05:21.459 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:21.459 CC module/bdev/nvme/nvme_rpc.o 00:05:21.459 CC module/bdev/raid/raid1.o 00:05:21.459 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:21.459 CC module/bdev/raid/concat.o 00:05:21.459 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:21.459 CC 
module/bdev/nvme/bdev_mdns_client.o 00:05:21.459 CC module/bdev/nvme/vbdev_opal.o 00:05:21.459 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:21.459 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:21.459 SYMLINK libspdk_vfu_device.so 00:05:21.718 SYMLINK libspdk_fsdev_aio.so 00:05:21.718 LIB libspdk_sock_posix.a 00:05:21.718 SO libspdk_sock_posix.so.6.0 00:05:21.976 LIB libspdk_blobfs_bdev.a 00:05:21.976 SO libspdk_blobfs_bdev.so.6.0 00:05:21.976 SYMLINK libspdk_blobfs_bdev.so 00:05:21.976 SYMLINK libspdk_sock_posix.so 00:05:21.976 LIB libspdk_bdev_split.a 00:05:21.976 LIB libspdk_bdev_error.a 00:05:21.976 LIB libspdk_bdev_iscsi.a 00:05:21.976 LIB libspdk_bdev_null.a 00:05:21.976 SO libspdk_bdev_split.so.6.0 00:05:21.976 SO libspdk_bdev_error.so.6.0 00:05:21.976 SO libspdk_bdev_iscsi.so.6.0 00:05:21.976 LIB libspdk_bdev_gpt.a 00:05:21.976 SO libspdk_bdev_null.so.6.0 00:05:21.976 LIB libspdk_bdev_passthru.a 00:05:21.976 SO libspdk_bdev_gpt.so.6.0 00:05:21.976 LIB libspdk_bdev_ftl.a 00:05:21.976 SO libspdk_bdev_passthru.so.6.0 00:05:21.976 SYMLINK libspdk_bdev_split.so 00:05:21.976 SYMLINK libspdk_bdev_error.so 00:05:21.976 SO libspdk_bdev_ftl.so.6.0 00:05:21.976 SYMLINK libspdk_bdev_iscsi.so 00:05:21.976 SYMLINK libspdk_bdev_null.so 00:05:22.234 SYMLINK libspdk_bdev_gpt.so 00:05:22.234 LIB libspdk_bdev_malloc.a 00:05:22.234 SYMLINK libspdk_bdev_passthru.so 00:05:22.234 LIB libspdk_bdev_zone_block.a 00:05:22.234 LIB libspdk_bdev_delay.a 00:05:22.234 SO libspdk_bdev_malloc.so.6.0 00:05:22.234 SYMLINK libspdk_bdev_ftl.so 00:05:22.234 LIB libspdk_bdev_aio.a 00:05:22.234 SO libspdk_bdev_zone_block.so.6.0 00:05:22.234 SO libspdk_bdev_delay.so.6.0 00:05:22.234 SO libspdk_bdev_aio.so.6.0 00:05:22.234 SYMLINK libspdk_bdev_malloc.so 00:05:22.234 SYMLINK libspdk_bdev_zone_block.so 00:05:22.234 SYMLINK libspdk_bdev_delay.so 00:05:22.234 SYMLINK libspdk_bdev_aio.so 00:05:22.234 LIB libspdk_bdev_virtio.a 00:05:22.234 LIB libspdk_bdev_lvol.a 00:05:22.234 SO libspdk_bdev_virtio.so.6.0 00:05:22.234 SO libspdk_bdev_lvol.so.6.0 00:05:22.234 SYMLINK libspdk_bdev_virtio.so 00:05:22.494 SYMLINK libspdk_bdev_lvol.so 00:05:22.751 LIB libspdk_bdev_raid.a 00:05:22.752 SO libspdk_bdev_raid.so.6.0 00:05:23.010 SYMLINK libspdk_bdev_raid.so 00:05:26.290 LIB libspdk_bdev_nvme.a 00:05:26.290 SO libspdk_bdev_nvme.so.7.0 00:05:26.290 SYMLINK libspdk_bdev_nvme.so 00:05:26.546 CC module/event/subsystems/vmd/vmd.o 00:05:26.546 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:26.546 CC module/event/subsystems/iobuf/iobuf.o 00:05:26.546 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:26.546 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:26.546 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:26.546 CC module/event/subsystems/sock/sock.o 00:05:26.546 CC module/event/subsystems/keyring/keyring.o 00:05:26.546 CC module/event/subsystems/fsdev/fsdev.o 00:05:26.546 CC module/event/subsystems/scheduler/scheduler.o 00:05:26.804 LIB libspdk_event_vfu_tgt.a 00:05:26.804 LIB libspdk_event_fsdev.a 00:05:26.804 LIB libspdk_event_vhost_blk.a 00:05:26.804 LIB libspdk_event_iobuf.a 00:05:26.804 SO libspdk_event_vfu_tgt.so.3.0 00:05:26.804 SO libspdk_event_vhost_blk.so.3.0 00:05:26.804 LIB libspdk_event_keyring.a 00:05:26.804 SO libspdk_event_fsdev.so.1.0 00:05:26.804 LIB libspdk_event_vmd.a 00:05:26.804 LIB libspdk_event_scheduler.a 00:05:26.804 LIB libspdk_event_sock.a 00:05:26.804 SO libspdk_event_iobuf.so.3.0 00:05:26.804 SO libspdk_event_keyring.so.1.0 00:05:26.804 SO libspdk_event_scheduler.so.4.0 00:05:26.804 SO 
libspdk_event_vmd.so.6.0 00:05:26.804 SO libspdk_event_sock.so.5.0 00:05:26.804 SYMLINK libspdk_event_vhost_blk.so 00:05:26.804 SYMLINK libspdk_event_vfu_tgt.so 00:05:26.804 SYMLINK libspdk_event_fsdev.so 00:05:26.804 SYMLINK libspdk_event_keyring.so 00:05:26.804 SYMLINK libspdk_event_iobuf.so 00:05:26.804 SYMLINK libspdk_event_sock.so 00:05:26.804 SYMLINK libspdk_event_scheduler.so 00:05:26.804 SYMLINK libspdk_event_vmd.so 00:05:27.062 CC module/event/subsystems/accel/accel.o 00:05:27.320 LIB libspdk_event_accel.a 00:05:27.577 SO libspdk_event_accel.so.6.0 00:05:27.577 SYMLINK libspdk_event_accel.so 00:05:27.835 CC module/event/subsystems/bdev/bdev.o 00:05:28.094 LIB libspdk_event_bdev.a 00:05:28.094 SO libspdk_event_bdev.so.6.0 00:05:28.351 SYMLINK libspdk_event_bdev.so 00:05:28.351 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:28.351 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:28.351 CC module/event/subsystems/ublk/ublk.o 00:05:28.351 CC module/event/subsystems/scsi/scsi.o 00:05:28.351 CC module/event/subsystems/nbd/nbd.o 00:05:28.609 LIB libspdk_event_nbd.a 00:05:28.609 LIB libspdk_event_ublk.a 00:05:28.609 LIB libspdk_event_scsi.a 00:05:28.609 SO libspdk_event_nbd.so.6.0 00:05:28.609 SO libspdk_event_ublk.so.3.0 00:05:28.609 SO libspdk_event_scsi.so.6.0 00:05:28.868 SYMLINK libspdk_event_nbd.so 00:05:28.868 SYMLINK libspdk_event_ublk.so 00:05:28.868 SYMLINK libspdk_event_scsi.so 00:05:28.868 LIB libspdk_event_nvmf.a 00:05:28.868 SO libspdk_event_nvmf.so.6.0 00:05:29.128 SYMLINK libspdk_event_nvmf.so 00:05:29.128 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:29.128 CC module/event/subsystems/iscsi/iscsi.o 00:05:29.128 LIB libspdk_event_vhost_scsi.a 00:05:29.128 SO libspdk_event_vhost_scsi.so.3.0 00:05:29.128 LIB libspdk_event_iscsi.a 00:05:29.387 SO libspdk_event_iscsi.so.6.0 00:05:29.387 SYMLINK libspdk_event_vhost_scsi.so 00:05:29.387 SYMLINK libspdk_event_iscsi.so 00:05:29.387 SO libspdk.so.6.0 00:05:29.387 SYMLINK libspdk.so 00:05:29.652 CC app/trace_record/trace_record.o 00:05:29.652 CC app/spdk_nvme_identify/identify.o 00:05:29.652 CXX app/trace/trace.o 00:05:29.652 CC app/spdk_nvme_discover/discovery_aer.o 00:05:29.652 CC app/spdk_nvme_perf/perf.o 00:05:29.652 CC app/spdk_lspci/spdk_lspci.o 00:05:29.652 CC app/spdk_top/spdk_top.o 00:05:29.652 TEST_HEADER include/spdk/accel.h 00:05:29.652 TEST_HEADER include/spdk/accel_module.h 00:05:29.652 TEST_HEADER include/spdk/assert.h 00:05:29.652 CC test/rpc_client/rpc_client_test.o 00:05:29.652 TEST_HEADER include/spdk/barrier.h 00:05:29.652 TEST_HEADER include/spdk/base64.h 00:05:29.652 TEST_HEADER include/spdk/bdev.h 00:05:29.652 TEST_HEADER include/spdk/bdev_module.h 00:05:29.652 TEST_HEADER include/spdk/bdev_zone.h 00:05:29.652 TEST_HEADER include/spdk/bit_array.h 00:05:29.652 TEST_HEADER include/spdk/bit_pool.h 00:05:29.652 TEST_HEADER include/spdk/blob_bdev.h 00:05:29.652 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:29.652 TEST_HEADER include/spdk/blobfs.h 00:05:29.652 TEST_HEADER include/spdk/blob.h 00:05:29.652 TEST_HEADER include/spdk/conf.h 00:05:29.652 TEST_HEADER include/spdk/config.h 00:05:29.652 TEST_HEADER include/spdk/cpuset.h 00:05:29.652 TEST_HEADER include/spdk/crc16.h 00:05:29.652 TEST_HEADER include/spdk/crc32.h 00:05:29.652 TEST_HEADER include/spdk/crc64.h 00:05:29.652 TEST_HEADER include/spdk/dif.h 00:05:29.652 TEST_HEADER include/spdk/dma.h 00:05:29.652 TEST_HEADER include/spdk/endian.h 00:05:29.652 TEST_HEADER include/spdk/env_dpdk.h 00:05:29.652 TEST_HEADER include/spdk/env.h 00:05:29.652 
TEST_HEADER include/spdk/event.h 00:05:29.652 TEST_HEADER include/spdk/fd_group.h 00:05:29.652 TEST_HEADER include/spdk/fd.h 00:05:29.652 TEST_HEADER include/spdk/file.h 00:05:29.652 TEST_HEADER include/spdk/fsdev_module.h 00:05:29.652 TEST_HEADER include/spdk/fsdev.h 00:05:29.652 TEST_HEADER include/spdk/ftl.h 00:05:29.652 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:29.652 TEST_HEADER include/spdk/gpt_spec.h 00:05:29.652 TEST_HEADER include/spdk/hexlify.h 00:05:29.652 TEST_HEADER include/spdk/histogram_data.h 00:05:29.652 TEST_HEADER include/spdk/idxd.h 00:05:29.652 TEST_HEADER include/spdk/idxd_spec.h 00:05:29.652 TEST_HEADER include/spdk/init.h 00:05:29.652 TEST_HEADER include/spdk/ioat.h 00:05:29.652 TEST_HEADER include/spdk/ioat_spec.h 00:05:29.652 TEST_HEADER include/spdk/iscsi_spec.h 00:05:29.652 TEST_HEADER include/spdk/json.h 00:05:29.652 TEST_HEADER include/spdk/jsonrpc.h 00:05:29.652 TEST_HEADER include/spdk/keyring.h 00:05:29.652 TEST_HEADER include/spdk/keyring_module.h 00:05:29.652 TEST_HEADER include/spdk/likely.h 00:05:29.652 TEST_HEADER include/spdk/log.h 00:05:29.652 TEST_HEADER include/spdk/lvol.h 00:05:29.652 TEST_HEADER include/spdk/md5.h 00:05:29.652 TEST_HEADER include/spdk/memory.h 00:05:29.652 TEST_HEADER include/spdk/mmio.h 00:05:29.652 TEST_HEADER include/spdk/nbd.h 00:05:29.652 TEST_HEADER include/spdk/net.h 00:05:29.652 TEST_HEADER include/spdk/notify.h 00:05:29.652 TEST_HEADER include/spdk/nvme.h 00:05:29.652 TEST_HEADER include/spdk/nvme_intel.h 00:05:29.652 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:29.652 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:29.652 TEST_HEADER include/spdk/nvme_spec.h 00:05:29.652 TEST_HEADER include/spdk/nvme_zns.h 00:05:29.652 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:29.652 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:29.652 TEST_HEADER include/spdk/nvmf.h 00:05:29.652 TEST_HEADER include/spdk/nvmf_spec.h 00:05:29.652 TEST_HEADER include/spdk/nvmf_transport.h 00:05:29.652 TEST_HEADER include/spdk/opal.h 00:05:29.652 TEST_HEADER include/spdk/opal_spec.h 00:05:29.652 TEST_HEADER include/spdk/pci_ids.h 00:05:29.652 TEST_HEADER include/spdk/pipe.h 00:05:29.652 TEST_HEADER include/spdk/queue.h 00:05:29.652 TEST_HEADER include/spdk/reduce.h 00:05:29.652 TEST_HEADER include/spdk/rpc.h 00:05:29.652 TEST_HEADER include/spdk/scheduler.h 00:05:29.652 TEST_HEADER include/spdk/scsi.h 00:05:29.652 TEST_HEADER include/spdk/sock.h 00:05:29.652 TEST_HEADER include/spdk/scsi_spec.h 00:05:29.652 TEST_HEADER include/spdk/string.h 00:05:29.652 TEST_HEADER include/spdk/stdinc.h 00:05:29.652 TEST_HEADER include/spdk/trace.h 00:05:29.652 TEST_HEADER include/spdk/thread.h 00:05:29.652 TEST_HEADER include/spdk/trace_parser.h 00:05:29.652 TEST_HEADER include/spdk/tree.h 00:05:29.652 TEST_HEADER include/spdk/ublk.h 00:05:29.652 TEST_HEADER include/spdk/uuid.h 00:05:29.652 TEST_HEADER include/spdk/util.h 00:05:29.652 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:29.652 TEST_HEADER include/spdk/version.h 00:05:29.652 CC app/spdk_dd/spdk_dd.o 00:05:29.652 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:29.652 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:29.652 TEST_HEADER include/spdk/vhost.h 00:05:29.652 TEST_HEADER include/spdk/vmd.h 00:05:29.652 TEST_HEADER include/spdk/xor.h 00:05:29.652 TEST_HEADER include/spdk/zipf.h 00:05:29.652 CXX test/cpp_headers/accel.o 00:05:29.652 CXX test/cpp_headers/accel_module.o 00:05:29.652 CXX test/cpp_headers/assert.o 00:05:29.652 CXX test/cpp_headers/barrier.o 00:05:29.652 CXX test/cpp_headers/base64.o 
00:05:29.652 CXX test/cpp_headers/bdev.o 00:05:29.652 CC app/iscsi_tgt/iscsi_tgt.o 00:05:29.652 CXX test/cpp_headers/bdev_module.o 00:05:29.652 CXX test/cpp_headers/bdev_zone.o 00:05:29.652 CXX test/cpp_headers/bit_array.o 00:05:29.652 CXX test/cpp_headers/bit_pool.o 00:05:29.652 CXX test/cpp_headers/blob_bdev.o 00:05:29.652 CXX test/cpp_headers/blobfs_bdev.o 00:05:29.652 CXX test/cpp_headers/blobfs.o 00:05:29.652 CC app/nvmf_tgt/nvmf_main.o 00:05:29.652 CXX test/cpp_headers/blob.o 00:05:29.652 CXX test/cpp_headers/conf.o 00:05:29.652 CXX test/cpp_headers/config.o 00:05:29.652 CXX test/cpp_headers/cpuset.o 00:05:29.916 CXX test/cpp_headers/crc16.o 00:05:29.916 CXX test/cpp_headers/crc32.o 00:05:29.916 CC app/spdk_tgt/spdk_tgt.o 00:05:29.916 CC examples/ioat/verify/verify.o 00:05:29.916 CC examples/util/zipf/zipf.o 00:05:29.916 CC examples/ioat/perf/perf.o 00:05:29.916 CC app/fio/nvme/fio_plugin.o 00:05:29.916 CC test/app/histogram_perf/histogram_perf.o 00:05:29.916 CC test/app/jsoncat/jsoncat.o 00:05:29.916 CC test/env/vtophys/vtophys.o 00:05:29.916 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:29.916 CC test/env/memory/memory_ut.o 00:05:29.916 CC test/env/pci/pci_ut.o 00:05:29.916 CC test/thread/poller_perf/poller_perf.o 00:05:29.916 CC test/app/stub/stub.o 00:05:29.916 CC test/dma/test_dma/test_dma.o 00:05:29.916 CC app/fio/bdev/fio_plugin.o 00:05:29.916 CC test/app/bdev_svc/bdev_svc.o 00:05:29.916 LINK spdk_lspci 00:05:30.176 CC test/env/mem_callbacks/mem_callbacks.o 00:05:30.176 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:30.176 LINK rpc_client_test 00:05:30.176 LINK spdk_nvme_discover 00:05:30.176 CXX test/cpp_headers/crc64.o 00:05:30.176 LINK jsoncat 00:05:30.176 LINK vtophys 00:05:30.176 CXX test/cpp_headers/dif.o 00:05:30.176 LINK interrupt_tgt 00:05:30.176 CXX test/cpp_headers/dma.o 00:05:30.176 LINK histogram_perf 00:05:30.176 LINK poller_perf 00:05:30.176 LINK zipf 00:05:30.176 CXX test/cpp_headers/endian.o 00:05:30.176 LINK nvmf_tgt 00:05:30.176 LINK spdk_trace_record 00:05:30.176 CXX test/cpp_headers/env_dpdk.o 00:05:30.176 CXX test/cpp_headers/env.o 00:05:30.176 CXX test/cpp_headers/event.o 00:05:30.176 LINK env_dpdk_post_init 00:05:30.176 CXX test/cpp_headers/fd_group.o 00:05:30.176 CXX test/cpp_headers/file.o 00:05:30.176 CXX test/cpp_headers/fsdev.o 00:05:30.176 CXX test/cpp_headers/fd.o 00:05:30.176 LINK iscsi_tgt 00:05:30.176 CXX test/cpp_headers/fsdev_module.o 00:05:30.440 CXX test/cpp_headers/ftl.o 00:05:30.440 LINK stub 00:05:30.440 CXX test/cpp_headers/fuse_dispatcher.o 00:05:30.440 CXX test/cpp_headers/gpt_spec.o 00:05:30.440 LINK verify 00:05:30.440 LINK bdev_svc 00:05:30.440 CXX test/cpp_headers/hexlify.o 00:05:30.440 LINK spdk_tgt 00:05:30.440 LINK ioat_perf 00:05:30.440 CXX test/cpp_headers/histogram_data.o 00:05:30.440 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:30.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:30.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:30.440 CXX test/cpp_headers/idxd.o 00:05:30.440 CXX test/cpp_headers/idxd_spec.o 00:05:30.440 LINK spdk_dd 00:05:30.726 CXX test/cpp_headers/init.o 00:05:30.726 CXX test/cpp_headers/ioat.o 00:05:30.726 CXX test/cpp_headers/ioat_spec.o 00:05:30.726 LINK spdk_trace 00:05:30.726 CXX test/cpp_headers/iscsi_spec.o 00:05:30.726 CXX test/cpp_headers/json.o 00:05:30.726 CXX test/cpp_headers/jsonrpc.o 00:05:30.726 CXX test/cpp_headers/keyring.o 00:05:30.726 CXX test/cpp_headers/keyring_module.o 00:05:30.726 CXX test/cpp_headers/likely.o 00:05:30.726 CXX test/cpp_headers/log.o 
00:05:30.726 CXX test/cpp_headers/lvol.o 00:05:30.726 CXX test/cpp_headers/md5.o 00:05:30.726 CXX test/cpp_headers/memory.o 00:05:30.726 CXX test/cpp_headers/mmio.o 00:05:30.726 CXX test/cpp_headers/nbd.o 00:05:30.726 CXX test/cpp_headers/net.o 00:05:30.726 CXX test/cpp_headers/notify.o 00:05:30.726 LINK pci_ut 00:05:30.726 CXX test/cpp_headers/nvme.o 00:05:30.726 CXX test/cpp_headers/nvme_intel.o 00:05:30.726 CXX test/cpp_headers/nvme_ocssd.o 00:05:30.726 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:30.726 CXX test/cpp_headers/nvme_spec.o 00:05:30.726 CXX test/cpp_headers/nvme_zns.o 00:05:30.991 CXX test/cpp_headers/nvmf_cmd.o 00:05:30.991 CC test/event/event_perf/event_perf.o 00:05:30.991 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:30.991 CC test/event/reactor/reactor.o 00:05:30.991 CXX test/cpp_headers/nvmf.o 00:05:30.991 LINK nvme_fuzz 00:05:30.991 CXX test/cpp_headers/nvmf_spec.o 00:05:30.991 CC test/event/reactor_perf/reactor_perf.o 00:05:30.991 CC examples/sock/hello_world/hello_sock.o 00:05:30.991 CXX test/cpp_headers/nvmf_transport.o 00:05:30.991 CC examples/vmd/lsvmd/lsvmd.o 00:05:30.991 CC examples/idxd/perf/perf.o 00:05:30.991 CC examples/vmd/led/led.o 00:05:30.991 CC test/event/app_repeat/app_repeat.o 00:05:30.991 LINK test_dma 00:05:30.991 LINK spdk_bdev 00:05:30.991 LINK spdk_nvme 00:05:30.991 CXX test/cpp_headers/opal.o 00:05:30.991 CXX test/cpp_headers/opal_spec.o 00:05:30.991 CC examples/thread/thread/thread_ex.o 00:05:30.991 CXX test/cpp_headers/pci_ids.o 00:05:30.991 CC test/event/scheduler/scheduler.o 00:05:30.991 CXX test/cpp_headers/pipe.o 00:05:30.991 CXX test/cpp_headers/queue.o 00:05:30.991 CXX test/cpp_headers/reduce.o 00:05:30.991 CXX test/cpp_headers/rpc.o 00:05:30.991 CXX test/cpp_headers/scheduler.o 00:05:30.991 CXX test/cpp_headers/scsi.o 00:05:30.991 CXX test/cpp_headers/scsi_spec.o 00:05:30.991 CXX test/cpp_headers/sock.o 00:05:31.250 CXX test/cpp_headers/stdinc.o 00:05:31.250 CXX test/cpp_headers/string.o 00:05:31.250 CXX test/cpp_headers/thread.o 00:05:31.250 CXX test/cpp_headers/trace.o 00:05:31.250 CXX test/cpp_headers/trace_parser.o 00:05:31.250 CXX test/cpp_headers/tree.o 00:05:31.250 CXX test/cpp_headers/ublk.o 00:05:31.250 CXX test/cpp_headers/util.o 00:05:31.250 CXX test/cpp_headers/uuid.o 00:05:31.250 LINK event_perf 00:05:31.250 CXX test/cpp_headers/version.o 00:05:31.250 CC app/vhost/vhost.o 00:05:31.250 CXX test/cpp_headers/vfio_user_pci.o 00:05:31.250 CXX test/cpp_headers/vfio_user_spec.o 00:05:31.250 LINK reactor 00:05:31.250 CXX test/cpp_headers/vhost.o 00:05:31.250 CXX test/cpp_headers/xor.o 00:05:31.250 CXX test/cpp_headers/vmd.o 00:05:31.250 LINK spdk_nvme_perf 00:05:31.250 CXX test/cpp_headers/zipf.o 00:05:31.250 LINK reactor_perf 00:05:31.250 LINK vhost_fuzz 00:05:31.250 LINK lsvmd 00:05:31.250 LINK led 00:05:31.250 LINK app_repeat 00:05:31.250 LINK mem_callbacks 00:05:31.509 LINK spdk_top 00:05:31.509 LINK spdk_nvme_identify 00:05:31.509 LINK hello_sock 00:05:31.509 LINK scheduler 00:05:31.509 LINK thread 00:05:31.767 LINK vhost 00:05:31.767 CC test/nvme/sgl/sgl.o 00:05:31.767 CC test/nvme/simple_copy/simple_copy.o 00:05:31.767 CC test/nvme/fused_ordering/fused_ordering.o 00:05:31.767 CC test/nvme/err_injection/err_injection.o 00:05:31.767 CC test/nvme/aer/aer.o 00:05:31.767 CC test/nvme/reset/reset.o 00:05:31.767 CC test/nvme/e2edp/nvme_dp.o 00:05:31.767 CC test/nvme/startup/startup.o 00:05:31.767 CC test/nvme/reserve/reserve.o 00:05:31.767 CC test/nvme/boot_partition/boot_partition.o 00:05:31.767 CC test/nvme/fdp/fdp.o 00:05:31.767 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:05:31.767 CC test/nvme/connect_stress/connect_stress.o 00:05:31.767 CC test/nvme/compliance/nvme_compliance.o 00:05:31.767 CC test/nvme/overhead/overhead.o 00:05:31.767 CC test/nvme/cuse/cuse.o 00:05:31.767 LINK idxd_perf 00:05:31.767 CC test/blobfs/mkfs/mkfs.o 00:05:31.767 CC test/accel/dif/dif.o 00:05:31.767 CC test/lvol/esnap/esnap.o 00:05:32.026 LINK fused_ordering 00:05:32.026 LINK connect_stress 00:05:32.026 LINK boot_partition 00:05:32.026 CC examples/nvme/abort/abort.o 00:05:32.026 CC examples/nvme/hello_world/hello_world.o 00:05:32.026 CC examples/nvme/hotplug/hotplug.o 00:05:32.026 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:32.026 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:32.026 LINK reserve 00:05:32.026 CC examples/nvme/reconnect/reconnect.o 00:05:32.026 CC examples/nvme/arbitration/arbitration.o 00:05:32.026 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:32.026 LINK startup 00:05:32.026 LINK mkfs 00:05:32.026 LINK err_injection 00:05:32.026 LINK reset 00:05:32.026 LINK nvme_dp 00:05:32.026 CC examples/accel/perf/accel_perf.o 00:05:32.026 LINK overhead 00:05:32.026 LINK doorbell_aers 00:05:32.026 LINK aer 00:05:32.026 CC examples/blob/hello_world/hello_blob.o 00:05:32.026 CC examples/blob/cli/blobcli.o 00:05:32.026 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:32.026 LINK fdp 00:05:32.026 LINK memory_ut 00:05:32.026 LINK simple_copy 00:05:32.284 LINK sgl 00:05:32.284 LINK cmb_copy 00:05:32.284 LINK nvme_compliance 00:05:32.284 LINK hello_world 00:05:32.284 LINK pmr_persistence 00:05:32.284 LINK hotplug 00:05:32.541 LINK hello_fsdev 00:05:32.541 LINK arbitration 00:05:32.541 LINK hello_blob 00:05:32.541 LINK reconnect 00:05:32.541 LINK abort 00:05:32.541 LINK dif 00:05:32.541 LINK blobcli 00:05:32.799 LINK accel_perf 00:05:32.799 LINK nvme_manage 00:05:33.058 CC test/bdev/bdevio/bdevio.o 00:05:33.058 CC examples/bdev/hello_world/hello_bdev.o 00:05:33.058 LINK iscsi_fuzz 00:05:33.058 CC examples/bdev/bdevperf/bdevperf.o 00:05:33.625 LINK hello_bdev 00:05:33.625 LINK cuse 00:05:33.884 LINK bdevio 00:05:34.451 LINK bdevperf 00:05:35.051 CC examples/nvmf/nvmf/nvmf.o 00:05:35.334 LINK nvmf 00:05:41.896 LINK esnap 00:05:41.896 00:05:41.896 real 1m50.031s 00:05:41.896 user 13m1.686s 00:05:41.896 sys 2m43.250s 00:05:41.896 09:26:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:41.896 09:26:36 make -- common/autotest_common.sh@10 -- $ set +x 00:05:41.896 ************************************ 00:05:41.896 END TEST make 00:05:41.896 ************************************ 00:05:41.896 09:26:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:41.896 09:26:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:41.896 09:26:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:41.896 09:26:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.896 09:26:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:41.896 09:26:36 -- pm/common@44 -- $ pid=1325894 00:05:41.896 09:26:36 -- pm/common@50 -- $ kill -TERM 1325894 00:05:41.896 09:26:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.896 09:26:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:41.896 09:26:36 -- pm/common@44 -- $ pid=1325896 00:05:41.896 09:26:36 -- pm/common@50 -- $ kill -TERM 1325896 00:05:41.896 09:26:36 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:41.896 09:26:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:41.896 09:26:36 -- pm/common@44 -- $ pid=1325898 00:05:41.896 09:26:36 -- pm/common@50 -- $ kill -TERM 1325898 00:05:41.896 09:26:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.896 09:26:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:41.896 09:26:36 -- pm/common@44 -- $ pid=1325926 00:05:41.896 09:26:36 -- pm/common@50 -- $ sudo -E kill -TERM 1325926 00:05:41.896 09:26:36 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.896 09:26:36 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.896 09:26:36 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.896 09:26:36 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.896 09:26:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.896 09:26:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.896 09:26:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.896 09:26:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.896 09:26:36 -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.896 09:26:36 -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.896 09:26:36 -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.896 09:26:36 -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.896 09:26:36 -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.896 09:26:36 -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.896 09:26:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.896 09:26:36 -- scripts/common.sh@344 -- # case "$op" in 00:05:41.896 09:26:36 -- scripts/common.sh@345 -- # : 1 00:05:41.896 09:26:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.896 09:26:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.896 09:26:36 -- scripts/common.sh@365 -- # decimal 1 00:05:41.896 09:26:36 -- scripts/common.sh@353 -- # local d=1 00:05:41.896 09:26:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.896 09:26:36 -- scripts/common.sh@355 -- # echo 1 00:05:41.896 09:26:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.896 09:26:36 -- scripts/common.sh@366 -- # decimal 2 00:05:41.896 09:26:36 -- scripts/common.sh@353 -- # local d=2 00:05:41.896 09:26:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.896 09:26:36 -- scripts/common.sh@355 -- # echo 2 00:05:41.896 09:26:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.896 09:26:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.896 09:26:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.896 09:26:36 -- scripts/common.sh@368 -- # return 0 00:05:41.896 09:26:36 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.896 09:26:36 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.896 --rc genhtml_branch_coverage=1 00:05:41.896 --rc genhtml_function_coverage=1 00:05:41.896 --rc genhtml_legend=1 00:05:41.896 --rc geninfo_all_blocks=1 00:05:41.896 --rc geninfo_unexecuted_blocks=1 00:05:41.896 00:05:41.896 ' 00:05:41.896 09:26:36 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.896 --rc genhtml_branch_coverage=1 00:05:41.896 --rc genhtml_function_coverage=1 00:05:41.896 --rc genhtml_legend=1 00:05:41.896 --rc geninfo_all_blocks=1 00:05:41.896 --rc geninfo_unexecuted_blocks=1 00:05:41.896 00:05:41.896 ' 00:05:41.896 09:26:36 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.896 --rc genhtml_branch_coverage=1 00:05:41.896 --rc genhtml_function_coverage=1 00:05:41.896 --rc genhtml_legend=1 00:05:41.896 --rc geninfo_all_blocks=1 00:05:41.896 --rc geninfo_unexecuted_blocks=1 00:05:41.896 00:05:41.896 ' 00:05:41.896 09:26:36 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.896 --rc genhtml_branch_coverage=1 00:05:41.896 --rc genhtml_function_coverage=1 00:05:41.896 --rc genhtml_legend=1 00:05:41.896 --rc geninfo_all_blocks=1 00:05:41.896 --rc geninfo_unexecuted_blocks=1 00:05:41.896 00:05:41.896 ' 00:05:41.896 09:26:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.896 09:26:36 -- nvmf/common.sh@7 -- # uname -s 00:05:41.896 09:26:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.896 09:26:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.896 09:26:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.896 09:26:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.896 09:26:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.896 09:26:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.896 09:26:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.896 09:26:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.896 09:26:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.896 09:26:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.896 09:26:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:41.896 09:26:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:41.896 09:26:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.896 09:26:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.896 09:26:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.896 09:26:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.896 09:26:36 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.896 09:26:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.896 09:26:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.896 09:26:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.896 09:26:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.896 09:26:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.897 09:26:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.897 09:26:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.897 09:26:36 -- paths/export.sh@5 -- # export PATH 00:05:41.897 09:26:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.897 09:26:36 -- nvmf/common.sh@51 -- # : 0 00:05:41.897 09:26:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.897 09:26:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.897 09:26:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.897 09:26:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.897 09:26:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.897 09:26:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.897 09:26:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.897 09:26:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.897 09:26:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.897 09:26:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:41.897 09:26:36 -- spdk/autotest.sh@32 -- # uname -s 00:05:41.897 09:26:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:41.897 09:26:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:41.897 09:26:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:05:41.897 09:26:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:41.897 09:26:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:41.897 09:26:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:41.897 09:26:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:41.897 09:26:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:41.897 09:26:36 -- spdk/autotest.sh@48 -- # udevadm_pid=1390623 00:05:41.897 09:26:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:41.897 09:26:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:41.897 09:26:36 -- pm/common@17 -- # local monitor 00:05:41.897 09:26:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.897 09:26:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.897 09:26:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.897 09:26:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.897 09:26:36 -- pm/common@21 -- # date +%s 00:05:41.897 09:26:36 -- pm/common@21 -- # date +%s 00:05:41.897 09:26:36 -- pm/common@25 -- # sleep 1 00:05:41.897 09:26:36 -- pm/common@21 -- # date +%s 00:05:41.897 09:26:36 -- pm/common@21 -- # date +%s 00:05:41.897 09:26:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285996 00:05:41.897 09:26:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285996 00:05:41.897 09:26:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285996 00:05:41.897 09:26:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285996 00:05:41.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285996_collect-cpu-load.pm.log 00:05:41.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285996_collect-vmstat.pm.log 00:05:41.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285996_collect-cpu-temp.pm.log 00:05:41.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285996_collect-bmc-pm.bmc.pm.log 00:05:42.830 09:26:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:42.830 09:26:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:42.830 09:26:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.830 09:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.830 09:26:37 -- spdk/autotest.sh@59 -- # create_test_list 00:05:42.830 09:26:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:42.830 09:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.830 09:26:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:42.830 09:26:37 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.087 09:26:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.087 09:26:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:43.087 09:26:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.087 09:26:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:43.087 09:26:37 -- common/autotest_common.sh@1455 -- # uname 00:05:43.087 09:26:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:43.087 09:26:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:43.087 09:26:37 -- common/autotest_common.sh@1475 -- # uname 00:05:43.087 09:26:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:43.087 09:26:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:43.087 09:26:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:43.087 lcov: LCOV version 1.15 00:05:43.087 09:26:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:09.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:09.637 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:41.711 09:27:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:41.711 09:27:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.711 09:27:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.711 09:27:32 -- spdk/autotest.sh@78 -- # rm -f 00:06:41.711 09:27:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:41.711 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:06:41.711 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:41.712 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:41.712 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:41.712 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:41.712 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:41.712 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:41.712 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:41.712 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:41.712 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:41.712 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:41.712 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:41.712 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:41.712 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:41.712 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:41.712 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:41.712 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:41.712 09:27:34 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:06:41.712 09:27:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:41.712 09:27:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:41.712 09:27:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:41.712 09:27:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.712 09:27:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:41.712 09:27:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:41.712 09:27:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.712 09:27:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.712 09:27:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:41.712 09:27:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:41.712 09:27:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:41.712 09:27:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:41.712 09:27:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:41.712 09:27:34 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:41.712 No valid GPT data, bailing 00:06:41.712 09:27:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:41.712 09:27:34 -- scripts/common.sh@394 -- # pt= 00:06:41.712 09:27:34 -- scripts/common.sh@395 -- # return 1 00:06:41.712 09:27:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:41.712 1+0 records in 00:06:41.712 1+0 records out 00:06:41.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00202696 s, 517 MB/s 00:06:41.712 09:27:34 -- spdk/autotest.sh@105 -- # sync 00:06:41.712 09:27:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:41.712 09:27:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:41.712 09:27:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:42.279 09:27:36 -- spdk/autotest.sh@111 -- # uname -s 00:06:42.279 09:27:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:42.279 09:27:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:42.279 09:27:36 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:43.655 Hugepages 00:06:43.655 node hugesize free / total 00:06:43.655 node0 1048576kB 0 / 0 00:06:43.655 node0 2048kB 0 / 0 00:06:43.655 node1 1048576kB 0 / 0 00:06:43.655 node1 2048kB 0 / 0 00:06:43.655 00:06:43.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:43.655 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:43.655 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:43.655 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:43.913 NVMe 0000:82:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:06:43.913 09:27:38 -- spdk/autotest.sh@117 -- # uname -s 00:06:43.913 09:27:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:43.913 09:27:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:43.913 09:27:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:45.287 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:45.287 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:45.287 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:46.221 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:46.479 09:27:41 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:47.414 09:27:42 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:47.414 09:27:42 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:47.414 09:27:42 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:47.414 09:27:42 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:47.414 09:27:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:47.414 09:27:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:47.414 09:27:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.414 09:27:42 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:47.414 09:27:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:47.414 09:27:42 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:47.414 09:27:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:06:47.414 09:27:42 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:48.784 Waiting for block devices as requested 00:06:48.784 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:06:48.784 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:49.042 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:49.042 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:49.042 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:49.300 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:49.300 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:49.300 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:49.300 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:49.558 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:49.558 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:49.558 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:49.558 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:49.816 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:49.816 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:49.816 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:50.074 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:06:50.074 09:27:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:50.074 09:27:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1485 -- # grep 0000:82:00.0/nvme/nvme 00:06:50.074 09:27:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:06:50.074 09:27:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:50.074 09:27:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:50.074 09:27:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:50.074 09:27:44 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:06:50.074 09:27:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:50.074 09:27:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:50.074 09:27:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:50.074 09:27:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:50.074 09:27:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:50.074 09:27:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:50.074 09:27:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:50.074 09:27:44 -- common/autotest_common.sh@1541 -- # continue 00:06:50.074 09:27:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:50.074 09:27:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.074 09:27:44 -- common/autotest_common.sh@10 -- # set +x 00:06:50.075 09:27:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:50.075 09:27:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.075 09:27:44 -- common/autotest_common.sh@10 -- # set +x 00:06:50.075 09:27:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:51.975 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:51.975 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:51.975 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:52.542 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:52.800 09:27:47 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:06:52.800 09:27:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.800 09:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.800 09:27:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:52.800 09:27:47 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:52.800 09:27:47 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:52.800 09:27:47 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:52.800 09:27:47 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:52.800 09:27:47 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:52.800 09:27:47 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:52.800 09:27:47 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:52.800 09:27:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:52.800 09:27:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:52.800 09:27:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:52.800 09:27:47 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:52.800 09:27:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:52.800 09:27:47 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:52.800 09:27:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:06:52.800 09:27:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:52.800 09:27:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:06:52.800 09:27:47 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:06:52.800 09:27:47 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:52.800 09:27:47 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:06:52.800 09:27:47 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:06:52.800 09:27:47 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:82:00.0 00:06:52.800 09:27:47 -- common/autotest_common.sh@1577 -- # [[ -z 0000:82:00.0 ]] 00:06:52.800 09:27:47 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1403976 00:06:52.800 09:27:47 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:52.800 09:27:47 -- common/autotest_common.sh@1583 -- # waitforlisten 1403976 00:06:52.800 09:27:47 -- common/autotest_common.sh@831 -- # '[' -z 1403976 ']' 00:06:52.800 09:27:47 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.800 09:27:47 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.800 09:27:47 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.800 09:27:47 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.800 09:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:53.059 [2024-10-07 09:27:47.689361] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:06:53.059 [2024-10-07 09:27:47.689503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403976 ] 00:06:53.059 [2024-10-07 09:27:47.773990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.318 [2024-10-07 09:27:47.901779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 09:27:48 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.576 09:27:48 -- common/autotest_common.sh@864 -- # return 0 00:06:53.576 09:27:48 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:53.576 09:27:48 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:53.576 09:27:48 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:06:56.859 nvme0n1 00:06:56.859 09:27:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:56.859 [2024-10-07 09:27:51.655875] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:56.859 [2024-10-07 09:27:51.655930] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:56.859 request: 00:06:56.859 { 00:06:56.859 "nvme_ctrlr_name": "nvme0", 00:06:56.859 "password": "test", 00:06:56.859 "method": "bdev_nvme_opal_revert", 00:06:56.859 "req_id": 1 00:06:56.859 } 00:06:56.859 Got JSON-RPC error response 00:06:56.859 response: 00:06:56.859 { 00:06:56.859 "code": -32603, 00:06:56.859 "message": "Internal error" 00:06:56.859 } 00:06:56.859 09:27:51 -- common/autotest_common.sh@1589 -- # true 00:06:56.859 09:27:51 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:56.859 09:27:51 -- common/autotest_common.sh@1593 -- # killprocess 1403976 00:06:56.859 09:27:51 -- common/autotest_common.sh@950 -- # '[' -z 1403976 ']' 00:06:56.859 09:27:51 -- common/autotest_common.sh@954 -- # kill -0 1403976 00:06:56.859 09:27:51 -- common/autotest_common.sh@955 -- # uname 00:06:56.859 09:27:51 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.859 09:27:51 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1403976 00:06:57.118 09:27:51 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.118 09:27:51 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.118 09:27:51 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1403976' 00:06:57.118 killing process with pid 1403976 00:06:57.118 09:27:51 -- common/autotest_common.sh@969 -- # kill 1403976 00:06:57.118 09:27:51 -- common/autotest_common.sh@974 -- # wait 1403976 00:06:59.017 09:27:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:59.017 09:27:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:59.017 09:27:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:59.017 09:27:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:59.017 09:27:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:59.017 09:27:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.017 09:27:53 -- common/autotest_common.sh@10 -- # set +x 00:06:59.017 09:27:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:59.017 09:27:53 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.017 09:27:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.017 09:27:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.017 09:27:53 -- common/autotest_common.sh@10 -- # set +x 00:06:59.017 ************************************ 00:06:59.017 START TEST env 00:06:59.017 ************************************ 00:06:59.017 09:27:53 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.017 * Looking for test storage... 00:06:59.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:59.017 09:27:53 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.017 09:27:53 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.017 09:27:53 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.275 09:27:53 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.275 09:27:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.275 09:27:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.275 09:27:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.275 09:27:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.275 09:27:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.275 09:27:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.275 09:27:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.275 09:27:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.275 09:27:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.275 09:27:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.275 09:27:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.275 09:27:53 env -- scripts/common.sh@344 -- # case "$op" in 00:06:59.275 09:27:53 env -- scripts/common.sh@345 -- # : 1 00:06:59.275 09:27:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.275 09:27:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.275 09:27:53 env -- scripts/common.sh@365 -- # decimal 1 00:06:59.275 09:27:53 env -- scripts/common.sh@353 -- # local d=1 00:06:59.275 09:27:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.275 09:27:53 env -- scripts/common.sh@355 -- # echo 1 00:06:59.275 09:27:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.275 09:27:53 env -- scripts/common.sh@366 -- # decimal 2 00:06:59.275 09:27:53 env -- scripts/common.sh@353 -- # local d=2 00:06:59.275 09:27:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.275 09:27:53 env -- scripts/common.sh@355 -- # echo 2 00:06:59.275 09:27:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.275 09:27:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.275 09:27:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.275 09:27:53 env -- scripts/common.sh@368 -- # return 0 00:06:59.275 09:27:53 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.275 09:27:53 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.275 --rc geninfo_unexecuted_blocks=1 00:06:59.275 00:06:59.275 ' 00:06:59.275 09:27:53 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.275 --rc genhtml_branch_coverage=1 00:06:59.275 --rc genhtml_function_coverage=1 00:06:59.275 --rc genhtml_legend=1 00:06:59.275 --rc geninfo_all_blocks=1 00:06:59.276 --rc geninfo_unexecuted_blocks=1 00:06:59.276 00:06:59.276 ' 00:06:59.276 09:27:53 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.276 --rc genhtml_branch_coverage=1 00:06:59.276 --rc genhtml_function_coverage=1 00:06:59.276 --rc genhtml_legend=1 00:06:59.276 --rc geninfo_all_blocks=1 00:06:59.276 --rc geninfo_unexecuted_blocks=1 00:06:59.276 00:06:59.276 ' 00:06:59.276 09:27:53 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.276 --rc genhtml_branch_coverage=1 00:06:59.276 --rc genhtml_function_coverage=1 00:06:59.276 --rc genhtml_legend=1 00:06:59.276 --rc geninfo_all_blocks=1 00:06:59.276 --rc geninfo_unexecuted_blocks=1 00:06:59.276 00:06:59.276 ' 00:06:59.276 09:27:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.276 09:27:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.276 09:27:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.276 09:27:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.276 ************************************ 00:06:59.276 START TEST env_memory 00:06:59.276 ************************************ 00:06:59.276 09:27:53 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.276 00:06:59.276 00:06:59.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.276 http://cunit.sourceforge.net/ 00:06:59.276 00:06:59.276 00:06:59.276 Suite: memory 00:06:59.276 Test: alloc and free memory map ...[2024-10-07 09:27:53.995716] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:59.276 passed 00:06:59.276 Test: mem map translation ...[2024-10-07 09:27:54.044052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:59.276 [2024-10-07 09:27:54.044078] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:59.276 [2024-10-07 09:27:54.044130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:59.276 [2024-10-07 09:27:54.044145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:59.534 passed 00:06:59.535 Test: mem map registration ...[2024-10-07 09:27:54.125200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:59.535 [2024-10-07 09:27:54.125257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:59.535 passed 00:06:59.535 Test: mem map adjacent registrations ...passed 00:06:59.535 00:06:59.535 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.535 suites 1 1 n/a 0 0 00:06:59.535 tests 4 4 4 0 0 00:06:59.535 asserts 152 152 152 0 n/a 00:06:59.535 00:06:59.535 Elapsed time = 0.316 seconds 00:06:59.535 00:06:59.535 real 0m0.329s 00:06:59.535 user 0m0.317s 00:06:59.535 sys 0m0.011s 00:06:59.535 09:27:54 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.535 09:27:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:59.535 ************************************ 00:06:59.535 END TEST env_memory 00:06:59.535 ************************************ 00:06:59.535 09:27:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.535 09:27:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.535 09:27:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.535 09:27:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.535 ************************************ 00:06:59.535 START TEST env_vtophys 00:06:59.535 ************************************ 00:06:59.535 09:27:54 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.535 EAL: lib.eal log level changed from notice to debug 00:06:59.535 EAL: Detected lcore 0 as core 0 on socket 0 00:06:59.535 EAL: Detected lcore 1 as core 1 on socket 0 00:06:59.535 EAL: Detected lcore 2 as core 2 on socket 0 00:06:59.535 EAL: Detected lcore 3 as core 3 on socket 0 00:06:59.535 EAL: Detected lcore 4 as core 4 on socket 0 00:06:59.535 EAL: Detected lcore 5 as core 5 on socket 0 00:06:59.535 EAL: Detected lcore 6 as core 8 on socket 0 00:06:59.535 EAL: Detected lcore 7 as core 9 on socket 0 00:06:59.535 EAL: Detected lcore 8 as core 10 on socket 0 00:06:59.535 EAL: Detected lcore 9 as core 11 on socket 0 00:06:59.535 EAL: Detected lcore 10 
as core 12 on socket 0 00:06:59.535 EAL: Detected lcore 11 as core 13 on socket 0 00:06:59.535 EAL: Detected lcore 12 as core 0 on socket 1 00:06:59.535 EAL: Detected lcore 13 as core 1 on socket 1 00:06:59.535 EAL: Detected lcore 14 as core 2 on socket 1 00:06:59.535 EAL: Detected lcore 15 as core 3 on socket 1 00:06:59.535 EAL: Detected lcore 16 as core 4 on socket 1 00:06:59.535 EAL: Detected lcore 17 as core 5 on socket 1 00:06:59.535 EAL: Detected lcore 18 as core 8 on socket 1 00:06:59.535 EAL: Detected lcore 19 as core 9 on socket 1 00:06:59.535 EAL: Detected lcore 20 as core 10 on socket 1 00:06:59.535 EAL: Detected lcore 21 as core 11 on socket 1 00:06:59.535 EAL: Detected lcore 22 as core 12 on socket 1 00:06:59.535 EAL: Detected lcore 23 as core 13 on socket 1 00:06:59.535 EAL: Detected lcore 24 as core 0 on socket 0 00:06:59.535 EAL: Detected lcore 25 as core 1 on socket 0 00:06:59.535 EAL: Detected lcore 26 as core 2 on socket 0 00:06:59.535 EAL: Detected lcore 27 as core 3 on socket 0 00:06:59.535 EAL: Detected lcore 28 as core 4 on socket 0 00:06:59.535 EAL: Detected lcore 29 as core 5 on socket 0 00:06:59.535 EAL: Detected lcore 30 as core 8 on socket 0 00:06:59.535 EAL: Detected lcore 31 as core 9 on socket 0 00:06:59.535 EAL: Detected lcore 32 as core 10 on socket 0 00:06:59.535 EAL: Detected lcore 33 as core 11 on socket 0 00:06:59.535 EAL: Detected lcore 34 as core 12 on socket 0 00:06:59.535 EAL: Detected lcore 35 as core 13 on socket 0 00:06:59.535 EAL: Detected lcore 36 as core 0 on socket 1 00:06:59.535 EAL: Detected lcore 37 as core 1 on socket 1 00:06:59.535 EAL: Detected lcore 38 as core 2 on socket 1 00:06:59.535 EAL: Detected lcore 39 as core 3 on socket 1 00:06:59.535 EAL: Detected lcore 40 as core 4 on socket 1 00:06:59.535 EAL: Detected lcore 41 as core 5 on socket 1 00:06:59.535 EAL: Detected lcore 42 as core 8 on socket 1 00:06:59.535 EAL: Detected lcore 43 as core 9 on socket 1 00:06:59.535 EAL: Detected lcore 44 as core 10 on socket 1 00:06:59.535 EAL: Detected lcore 45 as core 11 on socket 1 00:06:59.535 EAL: Detected lcore 46 as core 12 on socket 1 00:06:59.535 EAL: Detected lcore 47 as core 13 on socket 1 00:06:59.794 EAL: Maximum logical cores by configuration: 128 00:06:59.794 EAL: Detected CPU lcores: 48 00:06:59.794 EAL: Detected NUMA nodes: 2 00:06:59.794 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:59.794 EAL: Detected shared linkage of DPDK 00:06:59.794 EAL: No shared files mode enabled, IPC will be disabled 00:06:59.794 EAL: Bus pci wants IOVA as 'DC' 00:06:59.794 EAL: Buses did not request a specific IOVA mode. 00:06:59.794 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:59.795 EAL: Selected IOVA mode 'VA' 00:06:59.795 EAL: Probing VFIO support... 00:06:59.795 EAL: IOMMU type 1 (Type 1) is supported 00:06:59.795 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:59.795 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:59.795 EAL: VFIO support initialized 00:06:59.795 EAL: Ask a virtual area of 0x2e000 bytes 00:06:59.795 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:59.795 EAL: Setting up physically contiguous memory... 
00:06:59.795 EAL: Setting maximum number of open files to 524288 00:06:59.795 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:59.795 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:59.795 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:59.795 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:59.795 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.795 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:59.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.795 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.795 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:59.795 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:59.795 EAL: Hugepages will be freed exactly as allocated. 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: TSC frequency is ~2700000 KHz 00:06:59.795 EAL: Main lcore 0 is ready (tid=7f2d6c898a00;cpuset=[0]) 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 0 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 2MB 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:59.795 EAL: Mem event callback 'spdk:(nil)' registered 00:06:59.795 00:06:59.795 00:06:59.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.795 http://cunit.sourceforge.net/ 00:06:59.795 00:06:59.795 00:06:59.795 Suite: components_suite 00:06:59.795 Test: vtophys_malloc_test ...passed 00:06:59.795 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 4MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 4MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 6MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 6MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 10MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 10MB 00:06:59.795 EAL: Trying to obtain current memory policy. 
00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 18MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 18MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 34MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 34MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 66MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 66MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.795 EAL: Restoring previous memory policy: 4 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was expanded by 130MB 00:06:59.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.795 EAL: request: mp_malloc_sync 00:06:59.795 EAL: No shared files mode enabled, IPC is disabled 00:06:59.795 EAL: Heap on socket 0 was shrunk by 130MB 00:06:59.795 EAL: Trying to obtain current memory policy. 00:06:59.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.054 EAL: Restoring previous memory policy: 4 00:07:00.054 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.054 EAL: request: mp_malloc_sync 00:07:00.054 EAL: No shared files mode enabled, IPC is disabled 00:07:00.054 EAL: Heap on socket 0 was expanded by 258MB 00:07:00.054 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.054 EAL: request: mp_malloc_sync 00:07:00.054 EAL: No shared files mode enabled, IPC is disabled 00:07:00.054 EAL: Heap on socket 0 was shrunk by 258MB 00:07:00.054 EAL: Trying to obtain current memory policy. 
00:07:00.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.312 EAL: Restoring previous memory policy: 4 00:07:00.312 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.312 EAL: request: mp_malloc_sync 00:07:00.312 EAL: No shared files mode enabled, IPC is disabled 00:07:00.312 EAL: Heap on socket 0 was expanded by 514MB 00:07:00.312 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.572 EAL: request: mp_malloc_sync 00:07:00.572 EAL: No shared files mode enabled, IPC is disabled 00:07:00.572 EAL: Heap on socket 0 was shrunk by 514MB 00:07:00.572 EAL: Trying to obtain current memory policy. 00:07:00.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.894 EAL: Restoring previous memory policy: 4 00:07:00.894 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.894 EAL: request: mp_malloc_sync 00:07:00.894 EAL: No shared files mode enabled, IPC is disabled 00:07:00.894 EAL: Heap on socket 0 was expanded by 1026MB 00:07:01.175 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.175 EAL: request: mp_malloc_sync 00:07:01.175 EAL: No shared files mode enabled, IPC is disabled 00:07:01.175 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:01.175 passed 00:07:01.175 00:07:01.175 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.175 suites 1 1 n/a 0 0 00:07:01.175 tests 2 2 2 0 0 00:07:01.175 asserts 497 497 497 0 n/a 00:07:01.175 00:07:01.175 Elapsed time = 1.442 seconds 00:07:01.175 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.175 EAL: request: mp_malloc_sync 00:07:01.175 EAL: No shared files mode enabled, IPC is disabled 00:07:01.175 EAL: Heap on socket 0 was shrunk by 2MB 00:07:01.175 EAL: No shared files mode enabled, IPC is disabled 00:07:01.175 EAL: No shared files mode enabled, IPC is disabled 00:07:01.175 EAL: No shared files mode enabled, IPC is disabled 00:07:01.175 00:07:01.175 real 0m1.597s 00:07:01.175 user 0m0.901s 00:07:01.175 sys 0m0.661s 00:07:01.175 09:27:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.175 09:27:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 ************************************ 00:07:01.175 END TEST env_vtophys 00:07:01.175 ************************************ 00:07:01.175 09:27:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:01.175 09:27:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.175 09:27:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.175 09:27:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 ************************************ 00:07:01.175 START TEST env_pci 00:07:01.175 ************************************ 00:07:01.175 09:27:55 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:01.434 00:07:01.434 00:07:01.434 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.434 http://cunit.sourceforge.net/ 00:07:01.434 00:07:01.434 00:07:01.434 Suite: pci 00:07:01.434 Test: pci_hook ...[2024-10-07 09:27:55.997849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1405001 has claimed it 00:07:01.434 EAL: Cannot find device (10000:00:01.0) 00:07:01.434 EAL: Failed to attach device on primary process 00:07:01.434 passed 00:07:01.434 00:07:01.434 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:01.434 suites 1 1 n/a 0 0 00:07:01.434 tests 1 1 1 0 0 00:07:01.434 asserts 25 25 25 0 n/a 00:07:01.434 00:07:01.434 Elapsed time = 0.040 seconds 00:07:01.434 00:07:01.434 real 0m0.061s 00:07:01.434 user 0m0.018s 00:07:01.434 sys 0m0.042s 00:07:01.434 09:27:56 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.434 09:27:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:01.434 ************************************ 00:07:01.434 END TEST env_pci 00:07:01.434 ************************************ 00:07:01.434 09:27:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:01.434 09:27:56 env -- env/env.sh@15 -- # uname 00:07:01.434 09:27:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:01.434 09:27:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:01.434 09:27:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.434 09:27:56 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:01.434 09:27:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.434 09:27:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.435 ************************************ 00:07:01.435 START TEST env_dpdk_post_init 00:07:01.435 ************************************ 00:07:01.435 09:27:56 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.435 EAL: Detected CPU lcores: 48 00:07:01.435 EAL: Detected NUMA nodes: 2 00:07:01.435 EAL: Detected shared linkage of DPDK 00:07:01.435 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:01.435 EAL: Selected IOVA mode 'VA' 00:07:01.435 EAL: VFIO support initialized 00:07:01.435 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:01.435 EAL: Using IOMMU type 1 (Type 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:01.694 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:02.628 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
00:07:05.910 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:07:05.910 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:07:05.910 Starting DPDK initialization... 00:07:05.910 Starting SPDK post initialization... 00:07:05.910 SPDK NVMe probe 00:07:05.910 Attaching to 0000:82:00.0 00:07:05.910 Attached to 0000:82:00.0 00:07:05.910 Cleaning up... 00:07:05.910 00:07:05.910 real 0m4.481s 00:07:05.910 user 0m3.075s 00:07:05.910 sys 0m0.458s 00:07:05.910 09:28:00 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.910 09:28:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.910 ************************************ 00:07:05.910 END TEST env_dpdk_post_init 00:07:05.910 ************************************ 00:07:05.910 09:28:00 env -- env/env.sh@26 -- # uname 00:07:05.910 09:28:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:05.910 09:28:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.910 09:28:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.910 09:28:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.910 09:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.910 ************************************ 00:07:05.910 START TEST env_mem_callbacks 00:07:05.910 ************************************ 00:07:05.910 09:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.910 EAL: Detected CPU lcores: 48 00:07:05.910 EAL: Detected NUMA nodes: 2 00:07:05.910 EAL: Detected shared linkage of DPDK 00:07:05.910 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:05.910 EAL: Selected IOVA mode 'VA' 00:07:05.910 EAL: VFIO support initialized 00:07:06.171 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:06.171 00:07:06.171 00:07:06.171 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.171 http://cunit.sourceforge.net/ 00:07:06.171 00:07:06.171 00:07:06.171 Suite: memory 00:07:06.171 Test: test ... 
00:07:06.171 register 0x200000200000 2097152 00:07:06.171 malloc 3145728 00:07:06.171 register 0x200000400000 4194304 00:07:06.171 buf 0x200000500000 len 3145728 PASSED 00:07:06.171 malloc 64 00:07:06.171 buf 0x2000004fff40 len 64 PASSED 00:07:06.171 malloc 4194304 00:07:06.171 register 0x200000800000 6291456 00:07:06.171 buf 0x200000a00000 len 4194304 PASSED 00:07:06.171 free 0x200000500000 3145728 00:07:06.171 free 0x2000004fff40 64 00:07:06.171 unregister 0x200000400000 4194304 PASSED 00:07:06.171 free 0x200000a00000 4194304 00:07:06.171 unregister 0x200000800000 6291456 PASSED 00:07:06.171 malloc 8388608 00:07:06.171 register 0x200000400000 10485760 00:07:06.171 buf 0x200000600000 len 8388608 PASSED 00:07:06.171 free 0x200000600000 8388608 00:07:06.171 unregister 0x200000400000 10485760 PASSED 00:07:06.171 passed 00:07:06.171 00:07:06.171 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.171 suites 1 1 n/a 0 0 00:07:06.171 tests 1 1 1 0 0 00:07:06.171 asserts 15 15 15 0 n/a 00:07:06.171 00:07:06.171 Elapsed time = 0.006 seconds 00:07:06.171 00:07:06.171 real 0m0.085s 00:07:06.171 user 0m0.027s 00:07:06.171 sys 0m0.057s 00:07:06.171 09:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.171 09:28:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 ************************************ 00:07:06.171 END TEST env_mem_callbacks 00:07:06.171 ************************************ 00:07:06.171 00:07:06.171 real 0m7.092s 00:07:06.171 user 0m4.605s 00:07:06.171 sys 0m1.529s 00:07:06.171 09:28:00 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.171 09:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 ************************************ 00:07:06.171 END TEST env 00:07:06.171 ************************************ 00:07:06.171 09:28:00 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:06.171 09:28:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.171 09:28:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.171 09:28:00 -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 ************************************ 00:07:06.171 START TEST rpc 00:07:06.171 ************************************ 00:07:06.171 09:28:00 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:06.171 * Looking for test storage... 
00:07:06.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:06.171 09:28:00 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.171 09:28:00 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.171 09:28:00 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.431 09:28:01 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.431 09:28:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.431 09:28:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.431 09:28:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.431 09:28:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.431 09:28:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.431 09:28:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:06.431 09:28:01 rpc -- scripts/common.sh@345 -- # : 1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.431 09:28:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.431 09:28:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@353 -- # local d=1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.431 09:28:01 rpc -- scripts/common.sh@355 -- # echo 1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.431 09:28:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@353 -- # local d=2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.431 09:28:01 rpc -- scripts/common.sh@355 -- # echo 2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.431 09:28:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.431 09:28:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.431 09:28:01 rpc -- scripts/common.sh@368 -- # return 0 00:07:06.431 09:28:01 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.431 09:28:01 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.431 --rc genhtml_branch_coverage=1 00:07:06.431 --rc genhtml_function_coverage=1 00:07:06.431 --rc genhtml_legend=1 00:07:06.431 --rc geninfo_all_blocks=1 00:07:06.431 --rc geninfo_unexecuted_blocks=1 00:07:06.431 00:07:06.431 ' 00:07:06.431 09:28:01 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.431 --rc genhtml_branch_coverage=1 00:07:06.431 --rc genhtml_function_coverage=1 00:07:06.431 --rc genhtml_legend=1 00:07:06.431 --rc geninfo_all_blocks=1 00:07:06.431 --rc geninfo_unexecuted_blocks=1 00:07:06.431 00:07:06.431 ' 00:07:06.431 09:28:01 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.431 --rc genhtml_branch_coverage=1 00:07:06.432 --rc genhtml_function_coverage=1 
00:07:06.432 --rc genhtml_legend=1 00:07:06.432 --rc geninfo_all_blocks=1 00:07:06.432 --rc geninfo_unexecuted_blocks=1 00:07:06.432 00:07:06.432 ' 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.432 --rc genhtml_branch_coverage=1 00:07:06.432 --rc genhtml_function_coverage=1 00:07:06.432 --rc genhtml_legend=1 00:07:06.432 --rc geninfo_all_blocks=1 00:07:06.432 --rc geninfo_unexecuted_blocks=1 00:07:06.432 00:07:06.432 ' 00:07:06.432 09:28:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1405673 00:07:06.432 09:28:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:06.432 09:28:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.432 09:28:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1405673 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@831 -- # '[' -z 1405673 ']' 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.432 09:28:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.432 [2024-10-07 09:28:01.191379] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:06.432 [2024-10-07 09:28:01.191584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405673 ] 00:07:06.690 [2024-10-07 09:28:01.296799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.690 [2024-10-07 09:28:01.421343] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:06.690 [2024-10-07 09:28:01.421397] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1405673' to capture a snapshot of events at runtime. 00:07:06.690 [2024-10-07 09:28:01.421414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.690 [2024-10-07 09:28:01.421428] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.690 [2024-10-07 09:28:01.421440] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1405673 for offline analysis/debug. 
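The target start-up above is driven through the harness's rpc_cmd and waitforlisten helpers. As a rough standalone sketch of the same flow — not the harness's own code — using only the paths and commands printed in this log, with a plain sleep standing in for waitforlisten and hugepages assumed to be configured already:

```bash
# Start spdk_tgt with the bdev tracepoint group enabled, wait for its RPC socket,
# then issue a JSON-RPC call. Paths are the ones printed in the log above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!
sleep 2                                  # the harness uses waitforlisten "$tgt_pid" instead
"$SPDK/scripts/rpc.py" spdk_get_version  # sanity-check the default /var/tmp/spdk.sock socket
# The start-up notice above also suggests how to snapshot the enabled tracepoints:
#   spdk_trace -s spdk_tgt -p "$tgt_pid"
kill "$tgt_pid"
```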
00:07:06.690 [2024-10-07 09:28:01.422139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.948 09:28:01 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.948 09:28:01 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.948 09:28:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:06.948 09:28:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:06.948 09:28:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:06.948 09:28:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:06.948 09:28:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.948 09:28:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.948 09:28:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.205 ************************************ 00:07:07.205 START TEST rpc_integrity 00:07:07.205 ************************************ 00:07:07.205 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:07.205 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.205 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.205 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.206 { 00:07:07.206 "name": "Malloc0", 00:07:07.206 "aliases": [ 00:07:07.206 "50190a54-2e9f-4e85-a632-815b02125bbd" 00:07:07.206 ], 00:07:07.206 "product_name": "Malloc disk", 00:07:07.206 "block_size": 512, 00:07:07.206 "num_blocks": 16384, 00:07:07.206 "uuid": "50190a54-2e9f-4e85-a632-815b02125bbd", 00:07:07.206 "assigned_rate_limits": { 00:07:07.206 "rw_ios_per_sec": 0, 00:07:07.206 "rw_mbytes_per_sec": 0, 00:07:07.206 "r_mbytes_per_sec": 0, 00:07:07.206 "w_mbytes_per_sec": 0 00:07:07.206 }, 
00:07:07.206 "claimed": false, 00:07:07.206 "zoned": false, 00:07:07.206 "supported_io_types": { 00:07:07.206 "read": true, 00:07:07.206 "write": true, 00:07:07.206 "unmap": true, 00:07:07.206 "flush": true, 00:07:07.206 "reset": true, 00:07:07.206 "nvme_admin": false, 00:07:07.206 "nvme_io": false, 00:07:07.206 "nvme_io_md": false, 00:07:07.206 "write_zeroes": true, 00:07:07.206 "zcopy": true, 00:07:07.206 "get_zone_info": false, 00:07:07.206 "zone_management": false, 00:07:07.206 "zone_append": false, 00:07:07.206 "compare": false, 00:07:07.206 "compare_and_write": false, 00:07:07.206 "abort": true, 00:07:07.206 "seek_hole": false, 00:07:07.206 "seek_data": false, 00:07:07.206 "copy": true, 00:07:07.206 "nvme_iov_md": false 00:07:07.206 }, 00:07:07.206 "memory_domains": [ 00:07:07.206 { 00:07:07.206 "dma_device_id": "system", 00:07:07.206 "dma_device_type": 1 00:07:07.206 }, 00:07:07.206 { 00:07:07.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.206 "dma_device_type": 2 00:07:07.206 } 00:07:07.206 ], 00:07:07.206 "driver_specific": {} 00:07:07.206 } 00:07:07.206 ]' 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.206 [2024-10-07 09:28:01.881952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:07.206 [2024-10-07 09:28:01.882001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.206 [2024-10-07 09:28:01.882027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x961040 00:07:07.206 [2024-10-07 09:28:01.882043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.206 [2024-10-07 09:28:01.883580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.206 [2024-10-07 09:28:01.883607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.206 Passthru0 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.206 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.206 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.206 { 00:07:07.206 "name": "Malloc0", 00:07:07.206 "aliases": [ 00:07:07.206 "50190a54-2e9f-4e85-a632-815b02125bbd" 00:07:07.206 ], 00:07:07.206 "product_name": "Malloc disk", 00:07:07.206 "block_size": 512, 00:07:07.206 "num_blocks": 16384, 00:07:07.206 "uuid": "50190a54-2e9f-4e85-a632-815b02125bbd", 00:07:07.206 "assigned_rate_limits": { 00:07:07.206 "rw_ios_per_sec": 0, 00:07:07.206 "rw_mbytes_per_sec": 0, 00:07:07.206 "r_mbytes_per_sec": 0, 00:07:07.206 "w_mbytes_per_sec": 0 00:07:07.206 }, 00:07:07.206 "claimed": true, 00:07:07.206 "claim_type": "exclusive_write", 00:07:07.206 "zoned": false, 00:07:07.206 "supported_io_types": { 00:07:07.206 "read": true, 00:07:07.206 "write": true, 00:07:07.206 "unmap": true, 00:07:07.206 "flush": 
true, 00:07:07.206 "reset": true, 00:07:07.206 "nvme_admin": false, 00:07:07.206 "nvme_io": false, 00:07:07.206 "nvme_io_md": false, 00:07:07.206 "write_zeroes": true, 00:07:07.206 "zcopy": true, 00:07:07.206 "get_zone_info": false, 00:07:07.206 "zone_management": false, 00:07:07.206 "zone_append": false, 00:07:07.206 "compare": false, 00:07:07.206 "compare_and_write": false, 00:07:07.206 "abort": true, 00:07:07.206 "seek_hole": false, 00:07:07.206 "seek_data": false, 00:07:07.206 "copy": true, 00:07:07.206 "nvme_iov_md": false 00:07:07.206 }, 00:07:07.206 "memory_domains": [ 00:07:07.206 { 00:07:07.206 "dma_device_id": "system", 00:07:07.206 "dma_device_type": 1 00:07:07.206 }, 00:07:07.206 { 00:07:07.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.206 "dma_device_type": 2 00:07:07.206 } 00:07:07.206 ], 00:07:07.206 "driver_specific": {} 00:07:07.206 }, 00:07:07.206 { 00:07:07.206 "name": "Passthru0", 00:07:07.206 "aliases": [ 00:07:07.206 "a1feb083-b60a-5f38-985d-f6f127e7b58f" 00:07:07.206 ], 00:07:07.206 "product_name": "passthru", 00:07:07.206 "block_size": 512, 00:07:07.207 "num_blocks": 16384, 00:07:07.207 "uuid": "a1feb083-b60a-5f38-985d-f6f127e7b58f", 00:07:07.207 "assigned_rate_limits": { 00:07:07.207 "rw_ios_per_sec": 0, 00:07:07.207 "rw_mbytes_per_sec": 0, 00:07:07.207 "r_mbytes_per_sec": 0, 00:07:07.207 "w_mbytes_per_sec": 0 00:07:07.207 }, 00:07:07.207 "claimed": false, 00:07:07.207 "zoned": false, 00:07:07.207 "supported_io_types": { 00:07:07.207 "read": true, 00:07:07.207 "write": true, 00:07:07.207 "unmap": true, 00:07:07.207 "flush": true, 00:07:07.207 "reset": true, 00:07:07.207 "nvme_admin": false, 00:07:07.207 "nvme_io": false, 00:07:07.207 "nvme_io_md": false, 00:07:07.207 "write_zeroes": true, 00:07:07.207 "zcopy": true, 00:07:07.207 "get_zone_info": false, 00:07:07.207 "zone_management": false, 00:07:07.207 "zone_append": false, 00:07:07.207 "compare": false, 00:07:07.207 "compare_and_write": false, 00:07:07.207 "abort": true, 00:07:07.207 "seek_hole": false, 00:07:07.207 "seek_data": false, 00:07:07.207 "copy": true, 00:07:07.207 "nvme_iov_md": false 00:07:07.207 }, 00:07:07.207 "memory_domains": [ 00:07:07.207 { 00:07:07.207 "dma_device_id": "system", 00:07:07.207 "dma_device_type": 1 00:07:07.207 }, 00:07:07.207 { 00:07:07.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.207 "dma_device_type": 2 00:07:07.207 } 00:07:07.207 ], 00:07:07.207 "driver_specific": { 00:07:07.207 "passthru": { 00:07:07.207 "name": "Passthru0", 00:07:07.207 "base_bdev_name": "Malloc0" 00:07:07.207 } 00:07:07.207 } 00:07:07.207 } 00:07:07.207 ]' 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.207 09:28:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.207 09:28:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.207 09:28:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.207 00:07:07.207 real 0m0.240s 00:07:07.207 user 0m0.156s 00:07:07.207 sys 0m0.026s 00:07:07.207 09:28:02 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.207 09:28:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.207 ************************************ 00:07:07.207 END TEST rpc_integrity 00:07:07.207 ************************************ 00:07:07.465 09:28:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 ************************************ 00:07:07.465 START TEST rpc_plugins 00:07:07.465 ************************************ 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:07.465 { 00:07:07.465 "name": "Malloc1", 00:07:07.465 "aliases": [ 00:07:07.465 "e7c4efdf-844a-48b6-992c-260d0697d587" 00:07:07.465 ], 00:07:07.465 "product_name": "Malloc disk", 00:07:07.465 "block_size": 4096, 00:07:07.465 "num_blocks": 256, 00:07:07.465 "uuid": "e7c4efdf-844a-48b6-992c-260d0697d587", 00:07:07.465 "assigned_rate_limits": { 00:07:07.465 "rw_ios_per_sec": 0, 00:07:07.465 "rw_mbytes_per_sec": 0, 00:07:07.465 "r_mbytes_per_sec": 0, 00:07:07.465 "w_mbytes_per_sec": 0 00:07:07.465 }, 00:07:07.465 "claimed": false, 00:07:07.465 "zoned": false, 00:07:07.465 "supported_io_types": { 00:07:07.465 "read": true, 00:07:07.465 "write": true, 00:07:07.465 "unmap": true, 00:07:07.465 "flush": true, 00:07:07.465 "reset": true, 00:07:07.465 "nvme_admin": false, 00:07:07.465 "nvme_io": false, 00:07:07.465 "nvme_io_md": false, 00:07:07.465 "write_zeroes": true, 00:07:07.465 "zcopy": true, 00:07:07.465 "get_zone_info": false, 00:07:07.465 "zone_management": false, 00:07:07.465 "zone_append": false, 00:07:07.465 "compare": false, 00:07:07.465 "compare_and_write": false, 00:07:07.465 "abort": true, 00:07:07.465 "seek_hole": false, 00:07:07.465 "seek_data": false, 00:07:07.465 "copy": true, 00:07:07.465 "nvme_iov_md": false 
00:07:07.465 }, 00:07:07.465 "memory_domains": [ 00:07:07.465 { 00:07:07.465 "dma_device_id": "system", 00:07:07.465 "dma_device_type": 1 00:07:07.465 }, 00:07:07.465 { 00:07:07.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.465 "dma_device_type": 2 00:07:07.465 } 00:07:07.465 ], 00:07:07.465 "driver_specific": {} 00:07:07.465 } 00:07:07.465 ]' 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:07.465 09:28:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:07.465 00:07:07.465 real 0m0.118s 00:07:07.465 user 0m0.081s 00:07:07.465 sys 0m0.006s 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 ************************************ 00:07:07.465 END TEST rpc_plugins 00:07:07.465 ************************************ 00:07:07.465 09:28:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.465 09:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 ************************************ 00:07:07.465 START TEST rpc_trace_cmd_test 00:07:07.465 ************************************ 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:07.465 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1405673", 00:07:07.465 "tpoint_group_mask": "0x8", 00:07:07.465 "iscsi_conn": { 00:07:07.465 "mask": "0x2", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "scsi": { 00:07:07.465 "mask": "0x4", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "bdev": { 00:07:07.465 "mask": "0x8", 00:07:07.465 "tpoint_mask": "0xffffffffffffffff" 00:07:07.465 }, 00:07:07.465 "nvmf_rdma": { 00:07:07.465 "mask": "0x10", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "nvmf_tcp": { 00:07:07.465 "mask": "0x20", 00:07:07.465 
"tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "ftl": { 00:07:07.465 "mask": "0x40", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "blobfs": { 00:07:07.465 "mask": "0x80", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "dsa": { 00:07:07.465 "mask": "0x200", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "thread": { 00:07:07.465 "mask": "0x400", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "nvme_pcie": { 00:07:07.465 "mask": "0x800", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "iaa": { 00:07:07.465 "mask": "0x1000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "nvme_tcp": { 00:07:07.465 "mask": "0x2000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "bdev_nvme": { 00:07:07.465 "mask": "0x4000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "sock": { 00:07:07.465 "mask": "0x8000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "blob": { 00:07:07.465 "mask": "0x10000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "bdev_raid": { 00:07:07.465 "mask": "0x20000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 }, 00:07:07.465 "scheduler": { 00:07:07.465 "mask": "0x40000", 00:07:07.465 "tpoint_mask": "0x0" 00:07:07.465 } 00:07:07.465 }' 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:07.465 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:07.723 00:07:07.723 real 0m0.207s 00:07:07.723 user 0m0.186s 00:07:07.723 sys 0m0.013s 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.723 09:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.723 ************************************ 00:07:07.723 END TEST rpc_trace_cmd_test 00:07:07.723 ************************************ 00:07:07.723 09:28:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:07.723 09:28:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:07.723 09:28:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:07.723 09:28:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.723 09:28:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.723 09:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.723 ************************************ 00:07:07.723 START TEST rpc_daemon_integrity 00:07:07.723 ************************************ 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.723 09:28:02 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.723 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.982 { 00:07:07.982 "name": "Malloc2", 00:07:07.982 "aliases": [ 00:07:07.982 "615e979e-2893-4667-81db-f463fd5d90fa" 00:07:07.982 ], 00:07:07.982 "product_name": "Malloc disk", 00:07:07.982 "block_size": 512, 00:07:07.982 "num_blocks": 16384, 00:07:07.982 "uuid": "615e979e-2893-4667-81db-f463fd5d90fa", 00:07:07.982 "assigned_rate_limits": { 00:07:07.982 "rw_ios_per_sec": 0, 00:07:07.982 "rw_mbytes_per_sec": 0, 00:07:07.982 "r_mbytes_per_sec": 0, 00:07:07.982 "w_mbytes_per_sec": 0 00:07:07.982 }, 00:07:07.982 "claimed": false, 00:07:07.982 "zoned": false, 00:07:07.982 "supported_io_types": { 00:07:07.982 "read": true, 00:07:07.982 "write": true, 00:07:07.982 "unmap": true, 00:07:07.982 "flush": true, 00:07:07.982 "reset": true, 00:07:07.982 "nvme_admin": false, 00:07:07.982 "nvme_io": false, 00:07:07.982 "nvme_io_md": false, 00:07:07.982 "write_zeroes": true, 00:07:07.982 "zcopy": true, 00:07:07.982 "get_zone_info": false, 00:07:07.982 "zone_management": false, 00:07:07.982 "zone_append": false, 00:07:07.982 "compare": false, 00:07:07.982 "compare_and_write": false, 00:07:07.982 "abort": true, 00:07:07.982 "seek_hole": false, 00:07:07.982 "seek_data": false, 00:07:07.982 "copy": true, 00:07:07.982 "nvme_iov_md": false 00:07:07.982 }, 00:07:07.982 "memory_domains": [ 00:07:07.982 { 00:07:07.982 "dma_device_id": "system", 00:07:07.982 "dma_device_type": 1 00:07:07.982 }, 00:07:07.982 { 00:07:07.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.982 "dma_device_type": 2 00:07:07.982 } 00:07:07.982 ], 00:07:07.982 "driver_specific": {} 00:07:07.982 } 00:07:07.982 ]' 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 [2024-10-07 09:28:02.600579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:07.982 
[2024-10-07 09:28:02.600627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.982 [2024-10-07 09:28:02.600651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f17c0 00:07:07.982 [2024-10-07 09:28:02.600668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.982 [2024-10-07 09:28:02.602046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.982 [2024-10-07 09:28:02.602074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.982 Passthru0 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.982 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.982 { 00:07:07.982 "name": "Malloc2", 00:07:07.982 "aliases": [ 00:07:07.982 "615e979e-2893-4667-81db-f463fd5d90fa" 00:07:07.982 ], 00:07:07.982 "product_name": "Malloc disk", 00:07:07.982 "block_size": 512, 00:07:07.982 "num_blocks": 16384, 00:07:07.982 "uuid": "615e979e-2893-4667-81db-f463fd5d90fa", 00:07:07.982 "assigned_rate_limits": { 00:07:07.982 "rw_ios_per_sec": 0, 00:07:07.982 "rw_mbytes_per_sec": 0, 00:07:07.982 "r_mbytes_per_sec": 0, 00:07:07.982 "w_mbytes_per_sec": 0 00:07:07.982 }, 00:07:07.982 "claimed": true, 00:07:07.982 "claim_type": "exclusive_write", 00:07:07.982 "zoned": false, 00:07:07.983 "supported_io_types": { 00:07:07.983 "read": true, 00:07:07.983 "write": true, 00:07:07.983 "unmap": true, 00:07:07.983 "flush": true, 00:07:07.983 "reset": true, 00:07:07.983 "nvme_admin": false, 00:07:07.983 "nvme_io": false, 00:07:07.983 "nvme_io_md": false, 00:07:07.983 "write_zeroes": true, 00:07:07.983 "zcopy": true, 00:07:07.983 "get_zone_info": false, 00:07:07.983 "zone_management": false, 00:07:07.983 "zone_append": false, 00:07:07.983 "compare": false, 00:07:07.983 "compare_and_write": false, 00:07:07.983 "abort": true, 00:07:07.983 "seek_hole": false, 00:07:07.983 "seek_data": false, 00:07:07.983 "copy": true, 00:07:07.983 "nvme_iov_md": false 00:07:07.983 }, 00:07:07.983 "memory_domains": [ 00:07:07.983 { 00:07:07.983 "dma_device_id": "system", 00:07:07.983 "dma_device_type": 1 00:07:07.983 }, 00:07:07.983 { 00:07:07.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.983 "dma_device_type": 2 00:07:07.983 } 00:07:07.983 ], 00:07:07.983 "driver_specific": {} 00:07:07.983 }, 00:07:07.983 { 00:07:07.983 "name": "Passthru0", 00:07:07.983 "aliases": [ 00:07:07.983 "5b8b1335-ccfc-5260-a75f-69d3b564f3dd" 00:07:07.983 ], 00:07:07.983 "product_name": "passthru", 00:07:07.983 "block_size": 512, 00:07:07.983 "num_blocks": 16384, 00:07:07.983 "uuid": "5b8b1335-ccfc-5260-a75f-69d3b564f3dd", 00:07:07.983 "assigned_rate_limits": { 00:07:07.983 "rw_ios_per_sec": 0, 00:07:07.983 "rw_mbytes_per_sec": 0, 00:07:07.983 "r_mbytes_per_sec": 0, 00:07:07.983 "w_mbytes_per_sec": 0 00:07:07.983 }, 00:07:07.983 "claimed": false, 00:07:07.983 "zoned": false, 00:07:07.983 "supported_io_types": { 00:07:07.983 "read": true, 00:07:07.983 "write": true, 00:07:07.983 "unmap": true, 00:07:07.983 "flush": true, 00:07:07.983 "reset": true, 
00:07:07.983 "nvme_admin": false, 00:07:07.983 "nvme_io": false, 00:07:07.983 "nvme_io_md": false, 00:07:07.983 "write_zeroes": true, 00:07:07.983 "zcopy": true, 00:07:07.983 "get_zone_info": false, 00:07:07.983 "zone_management": false, 00:07:07.983 "zone_append": false, 00:07:07.983 "compare": false, 00:07:07.983 "compare_and_write": false, 00:07:07.983 "abort": true, 00:07:07.983 "seek_hole": false, 00:07:07.983 "seek_data": false, 00:07:07.983 "copy": true, 00:07:07.983 "nvme_iov_md": false 00:07:07.983 }, 00:07:07.983 "memory_domains": [ 00:07:07.983 { 00:07:07.983 "dma_device_id": "system", 00:07:07.983 "dma_device_type": 1 00:07:07.983 }, 00:07:07.983 { 00:07:07.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.983 "dma_device_type": 2 00:07:07.983 } 00:07:07.983 ], 00:07:07.983 "driver_specific": { 00:07:07.983 "passthru": { 00:07:07.983 "name": "Passthru0", 00:07:07.983 "base_bdev_name": "Malloc2" 00:07:07.983 } 00:07:07.983 } 00:07:07.983 } 00:07:07.983 ]' 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.983 00:07:07.983 real 0m0.239s 00:07:07.983 user 0m0.161s 00:07:07.983 sys 0m0.022s 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.983 09:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.983 ************************************ 00:07:07.983 END TEST rpc_daemon_integrity 00:07:07.983 ************************************ 00:07:07.983 09:28:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:07.983 09:28:02 rpc -- rpc/rpc.sh@84 -- # killprocess 1405673 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@950 -- # '[' -z 1405673 ']' 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@954 -- # kill -0 1405673 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@955 -- # uname 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405673 
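The rpc_integrity and rpc_daemon_integrity output above walks a malloc bdev through a passthru claim and back again. A hedged sketch of the same lifecycle issued directly with scripts/rpc.py (the Malloc0/Passthru0 names assume a target with no other bdevs, as in this run):

```bash
# Same bdev lifecycle the integrity tests exercise, against the running spdk_tgt.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks -> Malloc0
"$RPC" bdev_passthru_create -b Malloc0 -p Passthru0  # passthru claims Malloc0 exclusively
"$RPC" bdev_get_bdevs | jq length                    # 2 bdevs: Malloc0 + Passthru0 (the JSON above)
"$RPC" bdev_passthru_delete Passthru0
"$RPC" bdev_malloc_delete Malloc0
"$RPC" bdev_get_bdevs | jq length                    # back to 0
```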
00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405673' 00:07:07.983 killing process with pid 1405673 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@969 -- # kill 1405673 00:07:07.983 09:28:02 rpc -- common/autotest_common.sh@974 -- # wait 1405673 00:07:08.549 00:07:08.549 real 0m2.483s 00:07:08.549 user 0m3.124s 00:07:08.549 sys 0m0.799s 00:07:08.549 09:28:03 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.549 09:28:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.549 ************************************ 00:07:08.549 END TEST rpc 00:07:08.549 ************************************ 00:07:08.549 09:28:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.549 09:28:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.549 09:28:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.549 09:28:03 -- common/autotest_common.sh@10 -- # set +x 00:07:08.807 ************************************ 00:07:08.807 START TEST skip_rpc 00:07:08.807 ************************************ 00:07:08.807 09:28:03 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.807 * Looking for test storage... 00:07:08.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:08.807 09:28:03 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.807 09:28:03 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.807 09:28:03 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.807 09:28:03 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:08.807 09:28:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.065 09:28:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.065 --rc genhtml_branch_coverage=1 00:07:09.065 --rc genhtml_function_coverage=1 00:07:09.065 --rc genhtml_legend=1 00:07:09.065 --rc geninfo_all_blocks=1 00:07:09.065 --rc geninfo_unexecuted_blocks=1 00:07:09.065 00:07:09.065 ' 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.065 --rc genhtml_branch_coverage=1 00:07:09.065 --rc genhtml_function_coverage=1 00:07:09.065 --rc genhtml_legend=1 00:07:09.065 --rc geninfo_all_blocks=1 00:07:09.065 --rc geninfo_unexecuted_blocks=1 00:07:09.065 00:07:09.065 ' 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.065 --rc genhtml_branch_coverage=1 00:07:09.065 --rc genhtml_function_coverage=1 00:07:09.065 --rc genhtml_legend=1 00:07:09.065 --rc geninfo_all_blocks=1 00:07:09.065 --rc geninfo_unexecuted_blocks=1 00:07:09.065 00:07:09.065 ' 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.065 --rc genhtml_branch_coverage=1 00:07:09.065 --rc genhtml_function_coverage=1 00:07:09.065 --rc genhtml_legend=1 00:07:09.065 --rc geninfo_all_blocks=1 00:07:09.065 --rc geninfo_unexecuted_blocks=1 00:07:09.065 00:07:09.065 ' 00:07:09.065 09:28:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:09.065 09:28:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:09.065 09:28:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.065 09:28:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.066 09:28:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.066 ************************************ 00:07:09.066 START TEST skip_rpc 00:07:09.066 ************************************ 00:07:09.066 09:28:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:09.066 
09:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1406186 00:07:09.066 09:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:09.066 09:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.066 09:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:09.066 [2024-10-07 09:28:03.775771] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:09.066 [2024-10-07 09:28:03.775975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406186 ] 00:07:09.066 [2024-10-07 09:28:03.871217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.324 [2024-10-07 09:28:03.995928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1406186 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1406186 ']' 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1406186 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406186 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406186' 00:07:14.585 killing process with pid 1406186 00:07:14.585 09:28:08 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1406186 00:07:14.585 09:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1406186 00:07:14.585 00:07:14.585 real 0m5.595s 00:07:14.585 user 0m5.223s 00:07:14.585 sys 0m0.418s 00:07:14.585 09:28:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.585 09:28:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.585 ************************************ 00:07:14.585 END TEST skip_rpc 00:07:14.585 ************************************ 00:07:14.585 09:28:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:14.585 09:28:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.585 09:28:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.585 09:28:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.585 ************************************ 00:07:14.585 START TEST skip_rpc_with_json 00:07:14.585 ************************************ 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1406813 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1406813 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1406813 ']' 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.585 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.585 [2024-10-07 09:28:09.379208] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
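TEST skip_rpc above starts the target with --no-rpc-server and asserts that a JSON-RPC call cannot succeed. A minimal hand-run version of the same check, assuming the paths printed in this log and an otherwise idle node:

```bash
# With --no-rpc-server the target comes up but serves no RPC socket, so
# spdk_get_version must fail -- the NOT rpc_cmd assertion above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
pid=$!
sleep 2
if "$SPDK/scripts/rpc.py" spdk_get_version; then
    echo "unexpected: RPC server answered"
else
    echo "expected failure: no RPC server"
fi
kill "$pid"
```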
00:07:14.585 [2024-10-07 09:28:09.379307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406813 ] 00:07:14.844 [2024-10-07 09:28:09.445014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.844 [2024-10-07 09:28:09.570949] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.103 [2024-10-07 09:28:09.867263] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:15.103 request: 00:07:15.103 { 00:07:15.103 "trtype": "tcp", 00:07:15.103 "method": "nvmf_get_transports", 00:07:15.103 "req_id": 1 00:07:15.103 } 00:07:15.103 Got JSON-RPC error response 00:07:15.103 response: 00:07:15.103 { 00:07:15.103 "code": -19, 00:07:15.103 "message": "No such device" 00:07:15.103 } 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.103 [2024-10-07 09:28:09.875395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.103 09:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.362 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.362 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:15.362 { 00:07:15.362 "subsystems": [ 00:07:15.362 { 00:07:15.362 "subsystem": "fsdev", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "fsdev_set_opts", 00:07:15.362 "params": { 00:07:15.362 "fsdev_io_pool_size": 65535, 00:07:15.362 "fsdev_io_cache_size": 256 00:07:15.362 } 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "vfio_user_target", 00:07:15.362 "config": null 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "keyring", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "iobuf", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "iobuf_set_options", 00:07:15.362 "params": { 00:07:15.362 "small_pool_count": 8192, 00:07:15.362 "large_pool_count": 1024, 00:07:15.362 "small_bufsize": 8192, 00:07:15.362 "large_bufsize": 135168 00:07:15.362 } 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 
00:07:15.362 "subsystem": "sock", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "sock_set_default_impl", 00:07:15.362 "params": { 00:07:15.362 "impl_name": "posix" 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "sock_impl_set_options", 00:07:15.362 "params": { 00:07:15.362 "impl_name": "ssl", 00:07:15.362 "recv_buf_size": 4096, 00:07:15.362 "send_buf_size": 4096, 00:07:15.362 "enable_recv_pipe": true, 00:07:15.362 "enable_quickack": false, 00:07:15.362 "enable_placement_id": 0, 00:07:15.362 "enable_zerocopy_send_server": true, 00:07:15.362 "enable_zerocopy_send_client": false, 00:07:15.362 "zerocopy_threshold": 0, 00:07:15.362 "tls_version": 0, 00:07:15.362 "enable_ktls": false 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "sock_impl_set_options", 00:07:15.362 "params": { 00:07:15.362 "impl_name": "posix", 00:07:15.362 "recv_buf_size": 2097152, 00:07:15.362 "send_buf_size": 2097152, 00:07:15.362 "enable_recv_pipe": true, 00:07:15.362 "enable_quickack": false, 00:07:15.362 "enable_placement_id": 0, 00:07:15.362 "enable_zerocopy_send_server": true, 00:07:15.362 "enable_zerocopy_send_client": false, 00:07:15.362 "zerocopy_threshold": 0, 00:07:15.362 "tls_version": 0, 00:07:15.362 "enable_ktls": false 00:07:15.362 } 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "vmd", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "accel", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "accel_set_options", 00:07:15.362 "params": { 00:07:15.362 "small_cache_size": 128, 00:07:15.362 "large_cache_size": 16, 00:07:15.362 "task_count": 2048, 00:07:15.362 "sequence_count": 2048, 00:07:15.362 "buf_count": 2048 00:07:15.362 } 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "bdev", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "bdev_set_options", 00:07:15.362 "params": { 00:07:15.362 "bdev_io_pool_size": 65535, 00:07:15.362 "bdev_io_cache_size": 256, 00:07:15.362 "bdev_auto_examine": true, 00:07:15.362 "iobuf_small_cache_size": 128, 00:07:15.362 "iobuf_large_cache_size": 16 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "bdev_raid_set_options", 00:07:15.362 "params": { 00:07:15.362 "process_window_size_kb": 1024, 00:07:15.362 "process_max_bandwidth_mb_sec": 0 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "bdev_iscsi_set_options", 00:07:15.362 "params": { 00:07:15.362 "timeout_sec": 30 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "bdev_nvme_set_options", 00:07:15.362 "params": { 00:07:15.362 "action_on_timeout": "none", 00:07:15.362 "timeout_us": 0, 00:07:15.362 "timeout_admin_us": 0, 00:07:15.362 "keep_alive_timeout_ms": 10000, 00:07:15.362 "arbitration_burst": 0, 00:07:15.362 "low_priority_weight": 0, 00:07:15.362 "medium_priority_weight": 0, 00:07:15.362 "high_priority_weight": 0, 00:07:15.362 "nvme_adminq_poll_period_us": 10000, 00:07:15.362 "nvme_ioq_poll_period_us": 0, 00:07:15.362 "io_queue_requests": 0, 00:07:15.362 "delay_cmd_submit": true, 00:07:15.362 "transport_retry_count": 4, 00:07:15.362 "bdev_retry_count": 3, 00:07:15.362 "transport_ack_timeout": 0, 00:07:15.362 "ctrlr_loss_timeout_sec": 0, 00:07:15.362 "reconnect_delay_sec": 0, 00:07:15.362 "fast_io_fail_timeout_sec": 0, 00:07:15.362 "disable_auto_failback": false, 00:07:15.362 "generate_uuids": false, 00:07:15.362 "transport_tos": 0, 00:07:15.362 "nvme_error_stat": false, 
00:07:15.362 "rdma_srq_size": 0, 00:07:15.362 "io_path_stat": false, 00:07:15.362 "allow_accel_sequence": false, 00:07:15.362 "rdma_max_cq_size": 0, 00:07:15.362 "rdma_cm_event_timeout_ms": 0, 00:07:15.362 "dhchap_digests": [ 00:07:15.362 "sha256", 00:07:15.362 "sha384", 00:07:15.362 "sha512" 00:07:15.362 ], 00:07:15.362 "dhchap_dhgroups": [ 00:07:15.362 "null", 00:07:15.362 "ffdhe2048", 00:07:15.362 "ffdhe3072", 00:07:15.362 "ffdhe4096", 00:07:15.362 "ffdhe6144", 00:07:15.362 "ffdhe8192" 00:07:15.362 ] 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "bdev_nvme_set_hotplug", 00:07:15.362 "params": { 00:07:15.362 "period_us": 100000, 00:07:15.362 "enable": false 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "method": "bdev_wait_for_examine" 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "scsi", 00:07:15.362 "config": null 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "scheduler", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "framework_set_scheduler", 00:07:15.362 "params": { 00:07:15.362 "name": "static" 00:07:15.362 } 00:07:15.362 } 00:07:15.362 ] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "vhost_scsi", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "vhost_blk", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "ublk", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "nbd", 00:07:15.362 "config": [] 00:07:15.362 }, 00:07:15.362 { 00:07:15.362 "subsystem": "nvmf", 00:07:15.362 "config": [ 00:07:15.362 { 00:07:15.362 "method": "nvmf_set_config", 00:07:15.362 "params": { 00:07:15.362 "discovery_filter": "match_any", 00:07:15.362 "admin_cmd_passthru": { 00:07:15.362 "identify_ctrlr": false 00:07:15.362 }, 00:07:15.362 "dhchap_digests": [ 00:07:15.362 "sha256", 00:07:15.362 "sha384", 00:07:15.362 "sha512" 00:07:15.362 ], 00:07:15.362 "dhchap_dhgroups": [ 00:07:15.362 "null", 00:07:15.362 "ffdhe2048", 00:07:15.362 "ffdhe3072", 00:07:15.362 "ffdhe4096", 00:07:15.362 "ffdhe6144", 00:07:15.362 "ffdhe8192" 00:07:15.362 ] 00:07:15.362 } 00:07:15.362 }, 00:07:15.362 { 00:07:15.363 "method": "nvmf_set_max_subsystems", 00:07:15.363 "params": { 00:07:15.363 "max_subsystems": 1024 00:07:15.363 } 00:07:15.363 }, 00:07:15.363 { 00:07:15.363 "method": "nvmf_set_crdt", 00:07:15.363 "params": { 00:07:15.363 "crdt1": 0, 00:07:15.363 "crdt2": 0, 00:07:15.363 "crdt3": 0 00:07:15.363 } 00:07:15.363 }, 00:07:15.363 { 00:07:15.363 "method": "nvmf_create_transport", 00:07:15.363 "params": { 00:07:15.363 "trtype": "TCP", 00:07:15.363 "max_queue_depth": 128, 00:07:15.363 "max_io_qpairs_per_ctrlr": 127, 00:07:15.363 "in_capsule_data_size": 4096, 00:07:15.363 "max_io_size": 131072, 00:07:15.363 "io_unit_size": 131072, 00:07:15.363 "max_aq_depth": 128, 00:07:15.363 "num_shared_buffers": 511, 00:07:15.363 "buf_cache_size": 4294967295, 00:07:15.363 "dif_insert_or_strip": false, 00:07:15.363 "zcopy": false, 00:07:15.363 "c2h_success": true, 00:07:15.363 "sock_priority": 0, 00:07:15.363 "abort_timeout_sec": 1, 00:07:15.363 "ack_timeout": 0, 00:07:15.363 "data_wr_pool_size": 0 00:07:15.363 } 00:07:15.363 } 00:07:15.363 ] 00:07:15.363 }, 00:07:15.363 { 00:07:15.363 "subsystem": "iscsi", 00:07:15.363 "config": [ 00:07:15.363 { 00:07:15.363 "method": "iscsi_set_options", 00:07:15.363 "params": { 00:07:15.363 "node_base": "iqn.2016-06.io.spdk", 00:07:15.363 "max_sessions": 128, 00:07:15.363 
"max_connections_per_session": 2, 00:07:15.363 "max_queue_depth": 64, 00:07:15.363 "default_time2wait": 2, 00:07:15.363 "default_time2retain": 20, 00:07:15.363 "first_burst_length": 8192, 00:07:15.363 "immediate_data": true, 00:07:15.363 "allow_duplicated_isid": false, 00:07:15.363 "error_recovery_level": 0, 00:07:15.363 "nop_timeout": 60, 00:07:15.363 "nop_in_interval": 30, 00:07:15.363 "disable_chap": false, 00:07:15.363 "require_chap": false, 00:07:15.363 "mutual_chap": false, 00:07:15.363 "chap_group": 0, 00:07:15.363 "max_large_datain_per_connection": 64, 00:07:15.363 "max_r2t_per_connection": 4, 00:07:15.363 "pdu_pool_size": 36864, 00:07:15.363 "immediate_data_pool_size": 16384, 00:07:15.363 "data_out_pool_size": 2048 00:07:15.363 } 00:07:15.363 } 00:07:15.363 ] 00:07:15.363 } 00:07:15.363 ] 00:07:15.363 } 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1406813 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1406813 ']' 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1406813 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406813 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406813' 00:07:15.363 killing process with pid 1406813 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1406813 00:07:15.363 09:28:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1406813 00:07:15.929 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1406968 00:07:15.929 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:15.929 09:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1406968 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1406968 ']' 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1406968 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406968 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1406968' 00:07:21.188 killing process with pid 1406968 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1406968 00:07:21.188 09:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1406968 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:21.447 00:07:21.447 real 0m6.860s 00:07:21.447 user 0m6.462s 00:07:21.447 sys 0m0.796s 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.447 ************************************ 00:07:21.447 END TEST skip_rpc_with_json 00:07:21.447 ************************************ 00:07:21.447 09:28:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:21.447 09:28:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.447 09:28:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.447 09:28:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.447 ************************************ 00:07:21.447 START TEST skip_rpc_with_delay 00:07:21.447 ************************************ 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:21.447 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.706 [2024-10-07 
09:28:16.367158] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:21.706 [2024-10-07 09:28:16.367441] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.706 00:07:21.706 real 0m0.148s 00:07:21.706 user 0m0.101s 00:07:21.706 sys 0m0.045s 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.706 09:28:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:21.706 ************************************ 00:07:21.706 END TEST skip_rpc_with_delay 00:07:21.706 ************************************ 00:07:21.706 09:28:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:21.706 09:28:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:21.706 09:28:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:21.706 09:28:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.706 09:28:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.706 09:28:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.706 ************************************ 00:07:21.706 START TEST exit_on_failed_rpc_init 00:07:21.706 ************************************ 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1407671 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1407671 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1407671 ']' 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.706 09:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.966 [2024-10-07 09:28:16.563259] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
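skip_rpc_with_delay, which finishes above, only asserts that this flag combination is rejected at startup; a stripped-down equivalent, assuming the same binary path (a plain exit-status check replaces the NOT helper):

  #!/usr/bin/env bash
  # --wait-for-rpc combined with --no-rpc-server must make spdk_tgt refuse to start.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  if "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
  fi
  echo "got the expected startup failure"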
00:07:21.966 [2024-10-07 09:28:16.563433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407671 ] 00:07:21.966 [2024-10-07 09:28:16.658493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.224 [2024-10-07 09:28:16.784368] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:22.490 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.490 [2024-10-07 09:28:17.142874] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:22.490 [2024-10-07 09:28:17.142972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407798 ] 00:07:22.490 [2024-10-07 09:28:17.210309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.750 [2024-10-07 09:28:17.333923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.750 [2024-10-07 09:28:17.334045] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
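The 'socket in use' error above is the point of exit_on_failed_rpc_init: a second target is launched against the same default RPC socket and its initialization is required to fail. A condensed, illustrative reproduction (the real test waits for the first target with waitforlisten; the sleep below is a crude stand-in):

  #!/usr/bin/env bash
  # Two targets on the default /var/tmp/spdk.sock: the second one must fail to initialize.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 &       # first instance owns the RPC socket
  first=$!
  sleep 2                                   # crude wait instead of waitforlisten
  if "$SPDK/build/bin/spdk_tgt" -m 0x2; then
    echo "unexpected: second spdk_tgt started although the RPC socket is in use" >&2
    kill "$first"; exit 1
  fi
  kill "$first"
  echo "second instance failed as required"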
00:07:22.750 [2024-10-07 09:28:17.334068] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:22.750 [2024-10-07 09:28:17.334082] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1407671 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1407671 ']' 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1407671 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1407671 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1407671' 00:07:22.750 killing process with pid 1407671 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1407671 00:07:22.750 09:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1407671 00:07:23.316 00:07:23.316 real 0m1.557s 00:07:23.316 user 0m1.810s 00:07:23.316 sys 0m0.548s 00:07:23.316 09:28:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.316 09:28:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:23.316 ************************************ 00:07:23.316 END TEST exit_on_failed_rpc_init 00:07:23.316 ************************************ 00:07:23.316 09:28:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:23.316 00:07:23.316 real 0m14.668s 00:07:23.316 user 0m13.884s 00:07:23.316 sys 0m2.050s 00:07:23.316 09:28:18 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.316 09:28:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.316 ************************************ 00:07:23.316 END TEST skip_rpc 00:07:23.316 ************************************ 00:07:23.316 09:28:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:23.316 09:28:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.316 09:28:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.316 09:28:18 -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.316 ************************************ 00:07:23.316 START TEST rpc_client 00:07:23.316 ************************************ 00:07:23.316 09:28:18 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:23.575 * Looking for test storage... 00:07:23.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.575 09:28:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.575 09:28:18 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.575 --rc genhtml_branch_coverage=1 00:07:23.575 --rc genhtml_function_coverage=1 00:07:23.575 --rc genhtml_legend=1 00:07:23.576 --rc geninfo_all_blocks=1 00:07:23.576 --rc geninfo_unexecuted_blocks=1 00:07:23.576 00:07:23.576 ' 00:07:23.576 09:28:18 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.576 --rc genhtml_branch_coverage=1 00:07:23.576 --rc genhtml_function_coverage=1 00:07:23.576 --rc genhtml_legend=1 00:07:23.576 --rc geninfo_all_blocks=1 00:07:23.576 --rc geninfo_unexecuted_blocks=1 00:07:23.576 00:07:23.576 ' 00:07:23.576 09:28:18 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.576 --rc genhtml_branch_coverage=1 00:07:23.576 --rc genhtml_function_coverage=1 00:07:23.576 --rc genhtml_legend=1 00:07:23.576 --rc geninfo_all_blocks=1 00:07:23.576 --rc geninfo_unexecuted_blocks=1 00:07:23.576 00:07:23.576 ' 00:07:23.576 09:28:18 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.576 --rc genhtml_branch_coverage=1 00:07:23.576 --rc genhtml_function_coverage=1 00:07:23.576 --rc genhtml_legend=1 00:07:23.576 --rc geninfo_all_blocks=1 00:07:23.576 --rc geninfo_unexecuted_blocks=1 00:07:23.576 00:07:23.576 ' 00:07:23.576 09:28:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:23.576 OK 00:07:23.576 09:28:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:23.576 00:07:23.576 real 0m0.282s 00:07:23.576 user 0m0.202s 00:07:23.576 sys 0m0.090s 00:07:23.576 09:28:18 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.576 09:28:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:23.576 ************************************ 00:07:23.576 END TEST rpc_client 00:07:23.576 ************************************ 00:07:23.834 09:28:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
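The lcov gate traced before both rpc_client and json_config is an ordinary per-component version comparison: split on '.', '-' and ':', then compare field by field, which is what the lt / cmp_versions trace above shows. Condensed into a standalone function (cmp_lt is an illustrative name, not the script's own helper):

  #!/usr/bin/env bash
  # Return 0 when version $1 is strictly lower than version $2.
  cmp_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "lower than"
  }
  cmp_lt 1.15 2 && echo "1.15 is lower than 2"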
00:07:23.834 09:28:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.834 09:28:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.834 09:28:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.834 ************************************ 00:07:23.834 START TEST json_config 00:07:23.834 ************************************ 00:07:23.834 09:28:18 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:23.834 09:28:18 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.834 09:28:18 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.834 09:28:18 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.834 09:28:18 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.834 09:28:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.834 09:28:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.834 09:28:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.834 09:28:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.093 09:28:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.093 09:28:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.093 09:28:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.093 09:28:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.093 09:28:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.093 09:28:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.093 09:28:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.093 09:28:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:24.094 09:28:18 json_config -- scripts/common.sh@345 -- # : 1 00:07:24.094 09:28:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.094 09:28:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.094 09:28:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:24.094 09:28:18 json_config -- scripts/common.sh@353 -- # local d=1 00:07:24.094 09:28:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.094 09:28:18 json_config -- scripts/common.sh@355 -- # echo 1 00:07:24.094 09:28:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.094 09:28:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:24.094 09:28:18 json_config -- scripts/common.sh@353 -- # local d=2 00:07:24.094 09:28:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.094 09:28:18 json_config -- scripts/common.sh@355 -- # echo 2 00:07:24.094 09:28:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.094 09:28:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.094 09:28:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.094 09:28:18 json_config -- scripts/common.sh@368 -- # return 0 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.094 --rc genhtml_branch_coverage=1 00:07:24.094 --rc genhtml_function_coverage=1 00:07:24.094 --rc genhtml_legend=1 00:07:24.094 --rc geninfo_all_blocks=1 00:07:24.094 --rc geninfo_unexecuted_blocks=1 00:07:24.094 00:07:24.094 ' 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.094 --rc genhtml_branch_coverage=1 00:07:24.094 --rc genhtml_function_coverage=1 00:07:24.094 --rc genhtml_legend=1 00:07:24.094 --rc geninfo_all_blocks=1 00:07:24.094 --rc geninfo_unexecuted_blocks=1 00:07:24.094 00:07:24.094 ' 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.094 --rc genhtml_branch_coverage=1 00:07:24.094 --rc genhtml_function_coverage=1 00:07:24.094 --rc genhtml_legend=1 00:07:24.094 --rc geninfo_all_blocks=1 00:07:24.094 --rc geninfo_unexecuted_blocks=1 00:07:24.094 00:07:24.094 ' 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.094 --rc genhtml_branch_coverage=1 00:07:24.094 --rc genhtml_function_coverage=1 00:07:24.094 --rc genhtml_legend=1 00:07:24.094 --rc geninfo_all_blocks=1 00:07:24.094 --rc geninfo_unexecuted_blocks=1 00:07:24.094 00:07:24.094 ' 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:07:24.094 09:28:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.094 09:28:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.094 09:28:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.094 09:28:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.094 09:28:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.094 09:28:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.094 09:28:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.094 09:28:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.094 09:28:18 json_config -- paths/export.sh@5 -- # export PATH 00:07:24.094 09:28:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@51 -- # : 0 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:07:24.094 09:28:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.094 09:28:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:24.094 INFO: JSON configuration test init 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.094 09:28:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:24.094 09:28:18 json_config -- 
json_config/common.sh@9 -- # local app=target 00:07:24.094 09:28:18 json_config -- json_config/common.sh@10 -- # shift 00:07:24.094 09:28:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:24.094 09:28:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:24.094 09:28:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:24.094 09:28:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:24.094 09:28:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:24.094 09:28:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1408073 00:07:24.094 09:28:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:24.094 09:28:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:24.094 Waiting for target to run... 00:07:24.094 09:28:18 json_config -- json_config/common.sh@25 -- # waitforlisten 1408073 /var/tmp/spdk_tgt.sock 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@831 -- # '[' -z 1408073 ']' 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:24.094 09:28:18 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.095 09:28:18 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:24.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:24.095 09:28:18 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.095 09:28:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.095 [2024-10-07 09:28:18.782849] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
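The launch starting here follows the pattern this json_config suite uses throughout: a private RPC socket (-r /var/tmp/spdk_tgt.sock) plus --wait-for-rpc, then waiting for that socket to answer before any configuration is loaded. A condensed stand-in for that start-and-wait step (the polling loop is illustrative rather than the waitforlisten helper's exact code; rpc_get_methods is one of the calls that works before framework initialization):

  #!/usr/bin/env bash
  # Start the target on its own RPC socket and poll until the socket answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
  pid=$!
  for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods > /dev/null 2>&1 && break
    sleep 0.5
  done
  echo "spdk_tgt ($pid) is listening on $SOCK"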
00:07:24.095 [2024-10-07 09:28:18.782992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408073 ] 00:07:24.662 [2024-10-07 09:28:19.441338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.920 [2024-10-07 09:28:19.549716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:25.177 09:28:19 json_config -- json_config/common.sh@26 -- # echo '' 00:07:25.177 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.177 09:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:25.177 09:28:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:25.178 09:28:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:28.459 09:28:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.459 09:28:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:28.459 09:28:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:28.459 09:28:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:29.025 09:28:23 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@54 -- # sort 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:29.025 09:28:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:29.025 09:28:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.025 09:28:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:29.282 09:28:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.282 09:28:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.282 09:28:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:29.283 09:28:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:29.283 09:28:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:29.283 09:28:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:29.283 09:28:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:29.848 MallocForNvmf0 00:07:29.848 09:28:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:29.848 09:28:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:30.414 MallocForNvmf1 00:07:30.415 09:28:25 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:30.415 09:28:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:30.979 [2024-10-07 09:28:25.706902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.979 09:28:25 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.979 09:28:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:31.237 09:28:26 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:31.237 09:28:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:32.169 09:28:26 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:32.169 09:28:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:32.169 09:28:26 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:32.170 09:28:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:32.735 [2024-10-07 09:28:27.275889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:32.735 09:28:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:32.735 09:28:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.735 09:28:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 09:28:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:32.735 09:28:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.735 09:28:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 09:28:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:32.735 09:28:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:32.735 09:28:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:33.301 MallocBdevForConfigChangeCheck 00:07:33.301 09:28:27 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:33.301 09:28:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.301 09:28:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.301 09:28:27 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:33.302 09:28:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.865 09:28:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:33.865 INFO: shutting down applications... 
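Collected in one place, the create_nvmf_subsystem_config steps traced above amount to a short rpc.py sequence against the target socket. The arguments below are copied from the trace; the rpc shell function is only a convenience wrapper, and the save_config redirect target mirrors the configs_path / --json file used for the relaunch rather than an explicit redirect in the trace:

  #!/usr/bin/env bash
  # Replay of the nvmf configuration built above, as plain rpc.py calls.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > "$SPDK/spdk_tgt_config.json"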
00:07:33.865 09:28:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:33.865 09:28:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:33.865 09:28:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:33.865 09:28:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:35.763 Calling clear_iscsi_subsystem 00:07:35.763 Calling clear_nvmf_subsystem 00:07:35.763 Calling clear_nbd_subsystem 00:07:35.763 Calling clear_ublk_subsystem 00:07:35.763 Calling clear_vhost_blk_subsystem 00:07:35.763 Calling clear_vhost_scsi_subsystem 00:07:35.763 Calling clear_bdev_subsystem 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:35.763 09:28:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:36.021 09:28:30 json_config -- json_config/json_config.sh@352 -- # break 00:07:36.021 09:28:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:36.021 09:28:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:36.021 09:28:30 json_config -- json_config/common.sh@31 -- # local app=target 00:07:36.021 09:28:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:36.021 09:28:30 json_config -- json_config/common.sh@35 -- # [[ -n 1408073 ]] 00:07:36.021 09:28:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1408073 00:07:36.021 09:28:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:36.021 09:28:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:36.021 09:28:30 json_config -- json_config/common.sh@41 -- # kill -0 1408073 00:07:36.021 09:28:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:36.594 09:28:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:36.594 09:28:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:36.594 09:28:31 json_config -- json_config/common.sh@41 -- # kill -0 1408073 00:07:36.594 09:28:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:36.594 09:28:31 json_config -- json_config/common.sh@43 -- # break 00:07:36.594 09:28:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:36.594 09:28:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:36.594 SPDK target shutdown done 00:07:36.594 09:28:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:36.594 INFO: relaunching applications... 
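Note: the json_config_test_shutdown_app trace above is a plain signal-and-poll loop — SIGINT the target, then probe it with kill -0 for up to 30 half-second intervals. A condensed sketch of the same idea, with pid standing in for the run-specific 1408073:

  kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks that the process still exists
      sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'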
00:07:36.594 09:28:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:36.594 09:28:31 json_config -- json_config/common.sh@9 -- # local app=target 00:07:36.594 09:28:31 json_config -- json_config/common.sh@10 -- # shift 00:07:36.594 09:28:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:36.594 09:28:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:36.594 09:28:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:36.594 09:28:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.594 09:28:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.594 09:28:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1409652 00:07:36.594 09:28:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:36.594 09:28:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:36.594 Waiting for target to run... 00:07:36.594 09:28:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1409652 /var/tmp/spdk_tgt.sock 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 1409652 ']' 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:36.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.594 09:28:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.594 [2024-10-07 09:28:31.315744] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:36.594 [2024-10-07 09:28:31.315972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409652 ] 00:07:37.531 [2024-10-07 09:28:31.991406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.531 [2024-10-07 09:28:32.099443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.917 [2024-10-07 09:28:35.175782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.917 [2024-10-07 09:28:35.208356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:40.917 09:28:35 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.917 09:28:35 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:40.917 09:28:35 json_config -- json_config/common.sh@26 -- # echo '' 00:07:40.917 00:07:40.917 09:28:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:40.917 09:28:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:40.917 INFO: Checking if target configuration is the same... 
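Note: the json_diff.sh run that follows normalizes both sides with config_filter.py -method sort and then relies on a plain diff -u. Reduced to its core it looks roughly like the sketch below; the /tmp file names are stand-ins for the mktemp results, and config_filter.py is assumed to filter stdin to stdout, as the redirection-free trace suggests:

  SORT="$SPDK_ROOT/test/json_config/config_filter.py -method sort"
  # live config from the running target vs. the JSON file it was started from
  "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.json
  $SORT < "$SPDK_ROOT/spdk_tgt_config.json" > /tmp/saved.json
  if diff -u /tmp/live.json /tmp/saved.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi
  rm -f /tmp/live.json /tmp/saved.json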
00:07:40.917 09:28:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:40.917 09:28:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:40.917 09:28:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:40.917 + '[' 2 -ne 2 ']' 00:07:40.917 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:40.917 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:40.917 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.917 +++ basename /dev/fd/62 00:07:40.917 ++ mktemp /tmp/62.XXX 00:07:40.917 + tmp_file_1=/tmp/62.l0J 00:07:40.917 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:40.917 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:40.917 + tmp_file_2=/tmp/spdk_tgt_config.json.z71 00:07:40.917 + ret=0 00:07:40.917 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:41.174 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:41.174 + diff -u /tmp/62.l0J /tmp/spdk_tgt_config.json.z71 00:07:41.174 + echo 'INFO: JSON config files are the same' 00:07:41.174 INFO: JSON config files are the same 00:07:41.174 + rm /tmp/62.l0J /tmp/spdk_tgt_config.json.z71 00:07:41.174 + exit 0 00:07:41.174 09:28:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:41.174 09:28:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:41.174 INFO: changing configuration and checking if this can be detected... 00:07:41.174 09:28:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:41.174 09:28:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:41.740 09:28:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.740 09:28:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:41.740 09:28:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:41.740 + '[' 2 -ne 2 ']' 00:07:41.740 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:41.740 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:41.740 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.740 +++ basename /dev/fd/62 00:07:41.740 ++ mktemp /tmp/62.XXX 00:07:41.740 + tmp_file_1=/tmp/62.1F7 00:07:41.740 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.740 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:41.740 + tmp_file_2=/tmp/spdk_tgt_config.json.Fb8 00:07:41.740 + ret=0 00:07:41.740 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:42.019 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:42.019 + diff -u /tmp/62.1F7 /tmp/spdk_tgt_config.json.Fb8 00:07:42.019 + ret=1 00:07:42.019 + echo '=== Start of file: /tmp/62.1F7 ===' 00:07:42.019 + cat /tmp/62.1F7 00:07:42.019 + echo '=== End of file: /tmp/62.1F7 ===' 00:07:42.019 + echo '' 00:07:42.019 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fb8 ===' 00:07:42.019 + cat /tmp/spdk_tgt_config.json.Fb8 00:07:42.019 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fb8 ===' 00:07:42.019 + echo '' 00:07:42.019 + rm /tmp/62.1F7 /tmp/spdk_tgt_config.json.Fb8 00:07:42.019 + exit 1 00:07:42.019 09:28:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:42.019 INFO: configuration change detected. 00:07:42.019 09:28:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:42.019 09:28:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:42.019 09:28:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.019 09:28:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 1409652 ]] 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.278 09:28:36 json_config -- json_config/json_config.sh@330 -- # killprocess 1409652 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@950 -- # '[' -z 1409652 ']' 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@954 -- # kill -0 1409652 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@955 -- # uname 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.278 09:28:36 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409652 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409652' 00:07:42.278 killing process with pid 1409652 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@969 -- # kill 1409652 00:07:42.278 09:28:36 json_config -- common/autotest_common.sh@974 -- # wait 1409652 00:07:44.230 09:28:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:44.230 09:28:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:44.230 09:28:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.230 09:28:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:44.230 09:28:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:44.230 09:28:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:44.230 INFO: Success 00:07:44.230 00:07:44.230 real 0m20.173s 00:07:44.230 user 0m24.872s 00:07:44.230 sys 0m3.562s 00:07:44.230 09:28:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.230 09:28:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:44.230 ************************************ 00:07:44.230 END TEST json_config 00:07:44.230 ************************************ 00:07:44.230 09:28:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:44.230 09:28:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.230 09:28:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.230 09:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.230 ************************************ 00:07:44.230 START TEST json_config_extra_key 00:07:44.230 ************************************ 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.230 09:28:38 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.230 --rc genhtml_branch_coverage=1 00:07:44.230 --rc genhtml_function_coverage=1 00:07:44.230 --rc genhtml_legend=1 00:07:44.230 --rc geninfo_all_blocks=1 00:07:44.230 --rc geninfo_unexecuted_blocks=1 00:07:44.230 00:07:44.230 ' 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.230 --rc genhtml_branch_coverage=1 00:07:44.230 --rc genhtml_function_coverage=1 00:07:44.230 --rc genhtml_legend=1 00:07:44.230 --rc geninfo_all_blocks=1 00:07:44.230 --rc geninfo_unexecuted_blocks=1 00:07:44.230 00:07:44.230 ' 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.230 --rc genhtml_branch_coverage=1 00:07:44.230 --rc genhtml_function_coverage=1 00:07:44.230 --rc genhtml_legend=1 00:07:44.230 --rc geninfo_all_blocks=1 00:07:44.230 --rc geninfo_unexecuted_blocks=1 00:07:44.230 00:07:44.230 ' 00:07:44.230 09:28:38 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.230 --rc genhtml_branch_coverage=1 00:07:44.230 --rc genhtml_function_coverage=1 00:07:44.230 --rc genhtml_legend=1 00:07:44.230 --rc geninfo_all_blocks=1 00:07:44.230 --rc geninfo_unexecuted_blocks=1 00:07:44.230 00:07:44.230 ' 00:07:44.230 09:28:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.230 09:28:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.230 09:28:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.230 09:28:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.230 09:28:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.230 09:28:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.231 09:28:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:44.231 09:28:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.231 09:28:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:44.231 INFO: launching applications... 
00:07:44.231 09:28:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1410727 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:44.231 Waiting for target to run... 00:07:44.231 09:28:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1410727 /var/tmp/spdk_tgt.sock 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1410727 ']' 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.231 09:28:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:44.488 [2024-10-07 09:28:39.066798] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:44.488 [2024-10-07 09:28:39.066989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410727 ] 00:07:45.052 [2024-10-07 09:28:39.613728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.052 [2024-10-07 09:28:39.707852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.616 09:28:40 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.616 09:28:40 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:45.616 00:07:45.616 09:28:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:45.616 INFO: shutting down applications... 
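Note: json_config_test_start_app, traced above, boils down to launching spdk_tgt straight from a JSON file and blocking until its RPC socket answers. A minimal sketch of that launch-and-wait pattern; the readiness probe via rpc_get_methods is an illustration, not the waitforlisten helper from autotest_common.sh:

  "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK_ROOT/test/json_config/extra_key.json" &
  pid=$!
  # poll the RPC socket until the app is ready, bailing out if it died first
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" || { echo 'spdk_tgt exited before listening'; exit 1; }
      sleep 0.5
  done
  echo 'Waiting for target to run... done'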
00:07:45.616 09:28:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1410727 ]] 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1410727 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1410727 00:07:45.616 09:28:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:46.180 09:28:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:46.180 09:28:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:46.180 09:28:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1410727 00:07:46.180 09:28:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:46.744 09:28:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:46.744 09:28:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:46.744 09:28:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1410727 00:07:46.744 09:28:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:46.744 09:28:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:46.745 09:28:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:46.745 09:28:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:46.745 SPDK target shutdown done 00:07:46.745 09:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:46.745 Success 00:07:46.745 00:07:46.745 real 0m2.677s 00:07:46.745 user 0m2.361s 00:07:46.745 sys 0m0.691s 00:07:46.745 09:28:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.745 09:28:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 END TEST json_config_extra_key 00:07:46.745 ************************************ 00:07:46.745 09:28:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:46.745 09:28:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.745 09:28:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.745 09:28:41 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 START TEST alias_rpc 00:07:46.745 ************************************ 00:07:46.745 09:28:41 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:46.745 * Looking for test storage... 
00:07:46.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:46.745 09:28:41 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:46.745 09:28:41 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:46.745 09:28:41 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.002 09:28:41 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.002 09:28:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:47.002 09:28:41 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.002 09:28:41 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.002 --rc genhtml_branch_coverage=1 00:07:47.002 --rc genhtml_function_coverage=1 00:07:47.002 --rc genhtml_legend=1 00:07:47.002 --rc geninfo_all_blocks=1 00:07:47.002 --rc geninfo_unexecuted_blocks=1 00:07:47.002 00:07:47.002 ' 00:07:47.002 09:28:41 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.002 --rc genhtml_branch_coverage=1 00:07:47.002 --rc genhtml_function_coverage=1 00:07:47.002 --rc genhtml_legend=1 00:07:47.003 --rc geninfo_all_blocks=1 00:07:47.003 --rc geninfo_unexecuted_blocks=1 00:07:47.003 00:07:47.003 ' 00:07:47.003 09:28:41 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.003 --rc genhtml_branch_coverage=1 00:07:47.003 --rc genhtml_function_coverage=1 00:07:47.003 --rc genhtml_legend=1 00:07:47.003 --rc geninfo_all_blocks=1 00:07:47.003 --rc geninfo_unexecuted_blocks=1 00:07:47.003 00:07:47.003 ' 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.003 --rc genhtml_branch_coverage=1 00:07:47.003 --rc genhtml_function_coverage=1 00:07:47.003 --rc genhtml_legend=1 00:07:47.003 --rc geninfo_all_blocks=1 00:07:47.003 --rc geninfo_unexecuted_blocks=1 00:07:47.003 00:07:47.003 ' 00:07:47.003 09:28:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.003 09:28:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1411050 00:07:47.003 09:28:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:47.003 09:28:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1411050 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1411050 ']' 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.003 09:28:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.260 [2024-10-07 09:28:41.825395] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:07:47.260 [2024-10-07 09:28:41.825588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411050 ] 00:07:47.260 [2024-10-07 09:28:41.923234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.260 [2024-10-07 09:28:42.047387] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.824 09:28:42 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.824 09:28:42 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.824 09:28:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:48.082 09:28:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1411050 00:07:48.082 09:28:42 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1411050 ']' 00:07:48.082 09:28:42 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1411050 00:07:48.082 09:28:42 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1411050 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1411050' 00:07:48.340 killing process with pid 1411050 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@969 -- # kill 1411050 00:07:48.340 09:28:42 alias_rpc -- common/autotest_common.sh@974 -- # wait 1411050 00:07:48.906 00:07:48.906 real 0m2.042s 00:07:48.906 user 0m2.564s 00:07:48.906 sys 0m0.621s 00:07:48.906 09:28:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.906 09:28:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.906 ************************************ 00:07:48.906 END TEST alias_rpc 00:07:48.906 ************************************ 00:07:48.906 09:28:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:48.906 09:28:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:48.906 09:28:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.906 09:28:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.906 09:28:43 -- common/autotest_common.sh@10 -- # set +x 00:07:48.906 ************************************ 00:07:48.906 START TEST spdkcli_tcp 00:07:48.906 ************************************ 00:07:48.906 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:48.906 * Looking for test storage... 
00:07:48.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:48.906 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:48.906 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:48.906 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.164 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.164 09:28:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:49.164 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.164 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.164 --rc genhtml_branch_coverage=1 00:07:49.164 --rc genhtml_function_coverage=1 00:07:49.164 --rc genhtml_legend=1 00:07:49.164 --rc geninfo_all_blocks=1 00:07:49.164 --rc geninfo_unexecuted_blocks=1 00:07:49.164 00:07:49.164 ' 00:07:49.164 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.164 --rc genhtml_branch_coverage=1 00:07:49.164 --rc genhtml_function_coverage=1 00:07:49.164 --rc genhtml_legend=1 00:07:49.164 --rc geninfo_all_blocks=1 00:07:49.164 --rc 
geninfo_unexecuted_blocks=1 00:07:49.164 00:07:49.164 ' 00:07:49.164 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.164 --rc genhtml_branch_coverage=1 00:07:49.164 --rc genhtml_function_coverage=1 00:07:49.164 --rc genhtml_legend=1 00:07:49.164 --rc geninfo_all_blocks=1 00:07:49.164 --rc geninfo_unexecuted_blocks=1 00:07:49.164 00:07:49.164 ' 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.165 --rc genhtml_branch_coverage=1 00:07:49.165 --rc genhtml_function_coverage=1 00:07:49.165 --rc genhtml_legend=1 00:07:49.165 --rc geninfo_all_blocks=1 00:07:49.165 --rc geninfo_unexecuted_blocks=1 00:07:49.165 00:07:49.165 ' 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1411377 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:49.165 09:28:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1411377 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1411377 ']' 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.165 09:28:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.165 [2024-10-07 09:28:43.874311] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
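Note: the scripts/common.sh lt/cmp_versions trace repeated before each test above is a component-wise numeric version compare — both strings are split on '.', '-' and ':' and the fields are compared left to right (here 1.15 < 2, so the pre-2.0 lcov option set is picked). A reduced sketch of the same idea, not the library function itself:

  version_lt() {                      # true (0) when $1 sorts before $2
      local IFS=.-: v x y
      local -a a=($1) b=($2)
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          x=${a[v]:-0}; y=${b[v]:-0}
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                        # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'pre-2.0 lcov options' || echo 'current lcov options'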
00:07:49.165 [2024-10-07 09:28:43.874408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411377 ] 00:07:49.165 [2024-10-07 09:28:43.932494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.422 [2024-10-07 09:28:44.049851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.422 [2024-10-07 09:28:44.049855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.681 09:28:44 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.681 09:28:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:49.681 09:28:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1411509 00:07:49.681 09:28:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:49.681 09:28:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:50.246 [ 00:07:50.246 "bdev_malloc_delete", 00:07:50.246 "bdev_malloc_create", 00:07:50.246 "bdev_null_resize", 00:07:50.246 "bdev_null_delete", 00:07:50.246 "bdev_null_create", 00:07:50.246 "bdev_nvme_cuse_unregister", 00:07:50.246 "bdev_nvme_cuse_register", 00:07:50.246 "bdev_opal_new_user", 00:07:50.246 "bdev_opal_set_lock_state", 00:07:50.246 "bdev_opal_delete", 00:07:50.246 "bdev_opal_get_info", 00:07:50.246 "bdev_opal_create", 00:07:50.247 "bdev_nvme_opal_revert", 00:07:50.247 "bdev_nvme_opal_init", 00:07:50.247 "bdev_nvme_send_cmd", 00:07:50.247 "bdev_nvme_set_keys", 00:07:50.247 "bdev_nvme_get_path_iostat", 00:07:50.247 "bdev_nvme_get_mdns_discovery_info", 00:07:50.247 "bdev_nvme_stop_mdns_discovery", 00:07:50.247 "bdev_nvme_start_mdns_discovery", 00:07:50.247 "bdev_nvme_set_multipath_policy", 00:07:50.247 "bdev_nvme_set_preferred_path", 00:07:50.247 "bdev_nvme_get_io_paths", 00:07:50.247 "bdev_nvme_remove_error_injection", 00:07:50.247 "bdev_nvme_add_error_injection", 00:07:50.247 "bdev_nvme_get_discovery_info", 00:07:50.247 "bdev_nvme_stop_discovery", 00:07:50.247 "bdev_nvme_start_discovery", 00:07:50.247 "bdev_nvme_get_controller_health_info", 00:07:50.247 "bdev_nvme_disable_controller", 00:07:50.247 "bdev_nvme_enable_controller", 00:07:50.247 "bdev_nvme_reset_controller", 00:07:50.247 "bdev_nvme_get_transport_statistics", 00:07:50.247 "bdev_nvme_apply_firmware", 00:07:50.247 "bdev_nvme_detach_controller", 00:07:50.247 "bdev_nvme_get_controllers", 00:07:50.247 "bdev_nvme_attach_controller", 00:07:50.247 "bdev_nvme_set_hotplug", 00:07:50.247 "bdev_nvme_set_options", 00:07:50.247 "bdev_passthru_delete", 00:07:50.247 "bdev_passthru_create", 00:07:50.247 "bdev_lvol_set_parent_bdev", 00:07:50.247 "bdev_lvol_set_parent", 00:07:50.247 "bdev_lvol_check_shallow_copy", 00:07:50.247 "bdev_lvol_start_shallow_copy", 00:07:50.247 "bdev_lvol_grow_lvstore", 00:07:50.247 "bdev_lvol_get_lvols", 00:07:50.247 "bdev_lvol_get_lvstores", 00:07:50.247 "bdev_lvol_delete", 00:07:50.247 "bdev_lvol_set_read_only", 00:07:50.247 "bdev_lvol_resize", 00:07:50.247 "bdev_lvol_decouple_parent", 00:07:50.247 "bdev_lvol_inflate", 00:07:50.247 "bdev_lvol_rename", 00:07:50.247 "bdev_lvol_clone_bdev", 00:07:50.247 "bdev_lvol_clone", 00:07:50.247 "bdev_lvol_snapshot", 00:07:50.247 "bdev_lvol_create", 00:07:50.247 "bdev_lvol_delete_lvstore", 00:07:50.247 "bdev_lvol_rename_lvstore", 
00:07:50.247 "bdev_lvol_create_lvstore", 00:07:50.247 "bdev_raid_set_options", 00:07:50.247 "bdev_raid_remove_base_bdev", 00:07:50.247 "bdev_raid_add_base_bdev", 00:07:50.247 "bdev_raid_delete", 00:07:50.247 "bdev_raid_create", 00:07:50.247 "bdev_raid_get_bdevs", 00:07:50.247 "bdev_error_inject_error", 00:07:50.247 "bdev_error_delete", 00:07:50.247 "bdev_error_create", 00:07:50.247 "bdev_split_delete", 00:07:50.247 "bdev_split_create", 00:07:50.247 "bdev_delay_delete", 00:07:50.247 "bdev_delay_create", 00:07:50.247 "bdev_delay_update_latency", 00:07:50.247 "bdev_zone_block_delete", 00:07:50.247 "bdev_zone_block_create", 00:07:50.247 "blobfs_create", 00:07:50.247 "blobfs_detect", 00:07:50.247 "blobfs_set_cache_size", 00:07:50.247 "bdev_aio_delete", 00:07:50.247 "bdev_aio_rescan", 00:07:50.247 "bdev_aio_create", 00:07:50.247 "bdev_ftl_set_property", 00:07:50.247 "bdev_ftl_get_properties", 00:07:50.247 "bdev_ftl_get_stats", 00:07:50.247 "bdev_ftl_unmap", 00:07:50.247 "bdev_ftl_unload", 00:07:50.247 "bdev_ftl_delete", 00:07:50.247 "bdev_ftl_load", 00:07:50.247 "bdev_ftl_create", 00:07:50.247 "bdev_virtio_attach_controller", 00:07:50.247 "bdev_virtio_scsi_get_devices", 00:07:50.247 "bdev_virtio_detach_controller", 00:07:50.247 "bdev_virtio_blk_set_hotplug", 00:07:50.247 "bdev_iscsi_delete", 00:07:50.247 "bdev_iscsi_create", 00:07:50.247 "bdev_iscsi_set_options", 00:07:50.247 "accel_error_inject_error", 00:07:50.247 "ioat_scan_accel_module", 00:07:50.247 "dsa_scan_accel_module", 00:07:50.247 "iaa_scan_accel_module", 00:07:50.247 "vfu_virtio_create_fs_endpoint", 00:07:50.247 "vfu_virtio_create_scsi_endpoint", 00:07:50.247 "vfu_virtio_scsi_remove_target", 00:07:50.247 "vfu_virtio_scsi_add_target", 00:07:50.247 "vfu_virtio_create_blk_endpoint", 00:07:50.247 "vfu_virtio_delete_endpoint", 00:07:50.247 "keyring_file_remove_key", 00:07:50.247 "keyring_file_add_key", 00:07:50.247 "keyring_linux_set_options", 00:07:50.247 "fsdev_aio_delete", 00:07:50.247 "fsdev_aio_create", 00:07:50.247 "iscsi_get_histogram", 00:07:50.247 "iscsi_enable_histogram", 00:07:50.247 "iscsi_set_options", 00:07:50.247 "iscsi_get_auth_groups", 00:07:50.247 "iscsi_auth_group_remove_secret", 00:07:50.247 "iscsi_auth_group_add_secret", 00:07:50.247 "iscsi_delete_auth_group", 00:07:50.247 "iscsi_create_auth_group", 00:07:50.247 "iscsi_set_discovery_auth", 00:07:50.247 "iscsi_get_options", 00:07:50.247 "iscsi_target_node_request_logout", 00:07:50.247 "iscsi_target_node_set_redirect", 00:07:50.247 "iscsi_target_node_set_auth", 00:07:50.247 "iscsi_target_node_add_lun", 00:07:50.247 "iscsi_get_stats", 00:07:50.247 "iscsi_get_connections", 00:07:50.247 "iscsi_portal_group_set_auth", 00:07:50.247 "iscsi_start_portal_group", 00:07:50.247 "iscsi_delete_portal_group", 00:07:50.247 "iscsi_create_portal_group", 00:07:50.247 "iscsi_get_portal_groups", 00:07:50.247 "iscsi_delete_target_node", 00:07:50.247 "iscsi_target_node_remove_pg_ig_maps", 00:07:50.247 "iscsi_target_node_add_pg_ig_maps", 00:07:50.247 "iscsi_create_target_node", 00:07:50.247 "iscsi_get_target_nodes", 00:07:50.247 "iscsi_delete_initiator_group", 00:07:50.247 "iscsi_initiator_group_remove_initiators", 00:07:50.247 "iscsi_initiator_group_add_initiators", 00:07:50.247 "iscsi_create_initiator_group", 00:07:50.247 "iscsi_get_initiator_groups", 00:07:50.247 "nvmf_set_crdt", 00:07:50.247 "nvmf_set_config", 00:07:50.247 "nvmf_set_max_subsystems", 00:07:50.247 "nvmf_stop_mdns_prr", 00:07:50.247 "nvmf_publish_mdns_prr", 00:07:50.247 "nvmf_subsystem_get_listeners", 00:07:50.247 
"nvmf_subsystem_get_qpairs", 00:07:50.247 "nvmf_subsystem_get_controllers", 00:07:50.247 "nvmf_get_stats", 00:07:50.247 "nvmf_get_transports", 00:07:50.247 "nvmf_create_transport", 00:07:50.247 "nvmf_get_targets", 00:07:50.247 "nvmf_delete_target", 00:07:50.247 "nvmf_create_target", 00:07:50.247 "nvmf_subsystem_allow_any_host", 00:07:50.247 "nvmf_subsystem_set_keys", 00:07:50.247 "nvmf_subsystem_remove_host", 00:07:50.247 "nvmf_subsystem_add_host", 00:07:50.247 "nvmf_ns_remove_host", 00:07:50.247 "nvmf_ns_add_host", 00:07:50.247 "nvmf_subsystem_remove_ns", 00:07:50.247 "nvmf_subsystem_set_ns_ana_group", 00:07:50.247 "nvmf_subsystem_add_ns", 00:07:50.247 "nvmf_subsystem_listener_set_ana_state", 00:07:50.247 "nvmf_discovery_get_referrals", 00:07:50.247 "nvmf_discovery_remove_referral", 00:07:50.247 "nvmf_discovery_add_referral", 00:07:50.247 "nvmf_subsystem_remove_listener", 00:07:50.247 "nvmf_subsystem_add_listener", 00:07:50.247 "nvmf_delete_subsystem", 00:07:50.247 "nvmf_create_subsystem", 00:07:50.247 "nvmf_get_subsystems", 00:07:50.247 "env_dpdk_get_mem_stats", 00:07:50.247 "nbd_get_disks", 00:07:50.247 "nbd_stop_disk", 00:07:50.247 "nbd_start_disk", 00:07:50.247 "ublk_recover_disk", 00:07:50.247 "ublk_get_disks", 00:07:50.247 "ublk_stop_disk", 00:07:50.247 "ublk_start_disk", 00:07:50.247 "ublk_destroy_target", 00:07:50.247 "ublk_create_target", 00:07:50.247 "virtio_blk_create_transport", 00:07:50.247 "virtio_blk_get_transports", 00:07:50.247 "vhost_controller_set_coalescing", 00:07:50.247 "vhost_get_controllers", 00:07:50.247 "vhost_delete_controller", 00:07:50.247 "vhost_create_blk_controller", 00:07:50.247 "vhost_scsi_controller_remove_target", 00:07:50.247 "vhost_scsi_controller_add_target", 00:07:50.247 "vhost_start_scsi_controller", 00:07:50.247 "vhost_create_scsi_controller", 00:07:50.247 "thread_set_cpumask", 00:07:50.247 "scheduler_set_options", 00:07:50.247 "framework_get_governor", 00:07:50.247 "framework_get_scheduler", 00:07:50.247 "framework_set_scheduler", 00:07:50.247 "framework_get_reactors", 00:07:50.247 "thread_get_io_channels", 00:07:50.247 "thread_get_pollers", 00:07:50.247 "thread_get_stats", 00:07:50.247 "framework_monitor_context_switch", 00:07:50.247 "spdk_kill_instance", 00:07:50.247 "log_enable_timestamps", 00:07:50.247 "log_get_flags", 00:07:50.247 "log_clear_flag", 00:07:50.247 "log_set_flag", 00:07:50.247 "log_get_level", 00:07:50.247 "log_set_level", 00:07:50.247 "log_get_print_level", 00:07:50.247 "log_set_print_level", 00:07:50.247 "framework_enable_cpumask_locks", 00:07:50.247 "framework_disable_cpumask_locks", 00:07:50.247 "framework_wait_init", 00:07:50.247 "framework_start_init", 00:07:50.247 "scsi_get_devices", 00:07:50.247 "bdev_get_histogram", 00:07:50.247 "bdev_enable_histogram", 00:07:50.247 "bdev_set_qos_limit", 00:07:50.247 "bdev_set_qd_sampling_period", 00:07:50.247 "bdev_get_bdevs", 00:07:50.247 "bdev_reset_iostat", 00:07:50.247 "bdev_get_iostat", 00:07:50.247 "bdev_examine", 00:07:50.247 "bdev_wait_for_examine", 00:07:50.247 "bdev_set_options", 00:07:50.247 "accel_get_stats", 00:07:50.247 "accel_set_options", 00:07:50.247 "accel_set_driver", 00:07:50.247 "accel_crypto_key_destroy", 00:07:50.247 "accel_crypto_keys_get", 00:07:50.247 "accel_crypto_key_create", 00:07:50.247 "accel_assign_opc", 00:07:50.247 "accel_get_module_info", 00:07:50.247 "accel_get_opc_assignments", 00:07:50.247 "vmd_rescan", 00:07:50.248 "vmd_remove_device", 00:07:50.248 "vmd_enable", 00:07:50.248 "sock_get_default_impl", 00:07:50.248 "sock_set_default_impl", 
00:07:50.248 "sock_impl_set_options", 00:07:50.248 "sock_impl_get_options", 00:07:50.248 "iobuf_get_stats", 00:07:50.248 "iobuf_set_options", 00:07:50.248 "keyring_get_keys", 00:07:50.248 "vfu_tgt_set_base_path", 00:07:50.248 "framework_get_pci_devices", 00:07:50.248 "framework_get_config", 00:07:50.248 "framework_get_subsystems", 00:07:50.248 "fsdev_set_opts", 00:07:50.248 "fsdev_get_opts", 00:07:50.248 "trace_get_info", 00:07:50.248 "trace_get_tpoint_group_mask", 00:07:50.248 "trace_disable_tpoint_group", 00:07:50.248 "trace_enable_tpoint_group", 00:07:50.248 "trace_clear_tpoint_mask", 00:07:50.248 "trace_set_tpoint_mask", 00:07:50.248 "notify_get_notifications", 00:07:50.248 "notify_get_types", 00:07:50.248 "spdk_get_version", 00:07:50.248 "rpc_get_methods" 00:07:50.248 ] 00:07:50.248 09:28:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.248 09:28:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:50.248 09:28:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1411377 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1411377 ']' 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1411377 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.248 09:28:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1411377 00:07:50.506 09:28:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.506 09:28:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.506 09:28:45 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1411377' 00:07:50.506 killing process with pid 1411377 00:07:50.506 09:28:45 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1411377 00:07:50.506 09:28:45 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1411377 00:07:51.072 00:07:51.072 real 0m2.067s 00:07:51.072 user 0m3.909s 00:07:51.072 sys 0m0.581s 00:07:51.072 09:28:45 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.072 09:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.072 ************************************ 00:07:51.072 END TEST spdkcli_tcp 00:07:51.072 ************************************ 00:07:51.072 09:28:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.072 09:28:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.072 09:28:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.072 09:28:45 -- common/autotest_common.sh@10 -- # set +x 00:07:51.072 ************************************ 00:07:51.072 START TEST dpdk_mem_utility 00:07:51.072 ************************************ 00:07:51.072 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.072 * Looking for test storage... 
00:07:51.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:51.072 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:51.072 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:51.072 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:51.072 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.072 09:28:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.073 09:28:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.073 --rc genhtml_branch_coverage=1 00:07:51.073 --rc genhtml_function_coverage=1 00:07:51.073 --rc genhtml_legend=1 00:07:51.073 --rc geninfo_all_blocks=1 00:07:51.073 --rc geninfo_unexecuted_blocks=1 00:07:51.073 00:07:51.073 ' 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.073 --rc 
genhtml_branch_coverage=1 00:07:51.073 --rc genhtml_function_coverage=1 00:07:51.073 --rc genhtml_legend=1 00:07:51.073 --rc geninfo_all_blocks=1 00:07:51.073 --rc geninfo_unexecuted_blocks=1 00:07:51.073 00:07:51.073 ' 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.073 --rc genhtml_branch_coverage=1 00:07:51.073 --rc genhtml_function_coverage=1 00:07:51.073 --rc genhtml_legend=1 00:07:51.073 --rc geninfo_all_blocks=1 00:07:51.073 --rc geninfo_unexecuted_blocks=1 00:07:51.073 00:07:51.073 ' 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.073 --rc genhtml_branch_coverage=1 00:07:51.073 --rc genhtml_function_coverage=1 00:07:51.073 --rc genhtml_legend=1 00:07:51.073 --rc geninfo_all_blocks=1 00:07:51.073 --rc geninfo_unexecuted_blocks=1 00:07:51.073 00:07:51.073 ' 00:07:51.073 09:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:51.073 09:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1411718 00:07:51.073 09:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.073 09:28:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1411718 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1411718 ']' 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.073 09:28:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 [2024-10-07 09:28:45.960235] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:07:51.331 [2024-10-07 09:28:45.960423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411718 ] 00:07:51.331 [2024-10-07 09:28:46.050889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.589 [2024-10-07 09:28:46.178724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.848 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.848 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:51.848 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:51.848 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:51.848 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.848 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:51.848 { 00:07:51.848 "filename": "/tmp/spdk_mem_dump.txt" 00:07:51.848 } 00:07:51.848 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.848 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:51.848 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:51.848 1 heaps totaling size 860.000000 MiB 00:07:51.848 size: 860.000000 MiB heap id: 0 00:07:51.848 end heaps---------- 00:07:51.848 9 mempools totaling size 642.649841 MiB 00:07:51.848 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:51.848 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:51.848 size: 92.545471 MiB name: bdev_io_1411718 00:07:51.848 size: 51.011292 MiB name: evtpool_1411718 00:07:51.848 size: 50.003479 MiB name: msgpool_1411718 00:07:51.848 size: 36.509338 MiB name: fsdev_io_1411718 00:07:51.848 size: 21.763794 MiB name: PDU_Pool 00:07:51.848 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:51.848 size: 0.026123 MiB name: Session_Pool 00:07:51.848 end mempools------- 00:07:51.848 6 memzones totaling size 4.142822 MiB 00:07:51.848 size: 1.000366 MiB name: RG_ring_0_1411718 00:07:51.848 size: 1.000366 MiB name: RG_ring_1_1411718 00:07:51.848 size: 1.000366 MiB name: RG_ring_4_1411718 00:07:51.848 size: 1.000366 MiB name: RG_ring_5_1411718 00:07:51.848 size: 0.125366 MiB name: RG_ring_2_1411718 00:07:51.848 size: 0.015991 MiB name: RG_ring_3_1411718 00:07:51.848 end memzones------- 00:07:51.848 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:51.848 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:07:51.848 list of free elements. 
size: 13.984680 MiB 00:07:51.848 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:51.848 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:51.848 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:51.848 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:51.848 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:51.848 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:51.848 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:51.848 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:51.848 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:51.848 element at address: 0x20001d800000 with size: 0.582886 MiB 00:07:51.848 element at address: 0x200003e00000 with size: 0.495422 MiB 00:07:51.848 element at address: 0x20000d800000 with size: 0.490723 MiB 00:07:51.848 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:51.848 element at address: 0x200007000000 with size: 0.481934 MiB 00:07:51.848 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:07:51.848 element at address: 0x200003a00000 with size: 0.355042 MiB 00:07:51.848 list of standard malloc elements. size: 199.218628 MiB 00:07:51.848 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:51.848 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:51.848 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:51.848 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:51.848 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:51.848 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:51.848 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:51.848 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:51.848 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:51.848 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:51.848 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:51.848 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:07:51.848 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:51.848 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:07:51.848 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003aff940 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003eff000 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:51.849 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:51.849 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:07:51.849 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:51.849 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:51.849 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:51.849 list of memzone associated elements. size: 646.796692 MiB 00:07:51.849 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:51.849 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:51.849 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:51.849 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:51.849 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:51.849 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1411718_0 00:07:51.849 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:51.849 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1411718_0 00:07:51.849 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:51.849 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1411718_0 00:07:51.849 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:51.849 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1411718_0 00:07:51.849 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:51.849 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:51.849 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:51.849 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:51.849 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:51.849 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1411718 00:07:51.849 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:51.849 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1411718 00:07:51.849 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:51.849 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1411718 00:07:51.849 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:51.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:51.849 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:51.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:51.849 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:51.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:51.849 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:51.849 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:51.849 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:51.849 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1411718 00:07:51.849 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:51.849 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1411718 00:07:51.849 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:51.849 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1411718 00:07:51.849 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:51.849 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1411718 00:07:51.849 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:07:51.849 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1411718 00:07:51.849 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:07:51.849 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1411718 00:07:51.849 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:51.849 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:51.849 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:51.849 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:51.849 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:51.849 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:51.849 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:07:51.849 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1411718 00:07:51.849 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:51.849 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:51.849 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:07:51.849 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:51.849 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:07:51.849 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1411718 00:07:51.849 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:07:51.849 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:51.849 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:51.849 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1411718 00:07:51.849 element at address: 0x200003affa00 with size: 0.000305 MiB 00:07:51.849 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1411718 00:07:51.849 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:07:51.849 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1411718 00:07:51.849 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:07:51.849 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:51.849 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:51.849 09:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1411718 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1411718 ']' 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1411718 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1411718 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1411718' 
00:07:51.849 killing process with pid 1411718 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1411718 00:07:51.849 09:28:46 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1411718 00:07:52.415 00:07:52.415 real 0m1.480s 00:07:52.415 user 0m1.474s 00:07:52.415 sys 0m0.535s 00:07:52.415 09:28:47 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.415 09:28:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:52.415 ************************************ 00:07:52.415 END TEST dpdk_mem_utility 00:07:52.415 ************************************ 00:07:52.415 09:28:47 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:52.415 09:28:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.415 09:28:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.415 09:28:47 -- common/autotest_common.sh@10 -- # set +x 00:07:52.415 ************************************ 00:07:52.415 START TEST event 00:07:52.415 ************************************ 00:07:52.415 09:28:47 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:52.673 * Looking for test storage... 00:07:52.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:52.673 09:28:47 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:52.673 09:28:47 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:52.673 09:28:47 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:52.673 09:28:47 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:52.673 09:28:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.673 09:28:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.673 09:28:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.673 09:28:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.673 09:28:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.673 09:28:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.673 09:28:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.673 09:28:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.673 09:28:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.673 09:28:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.673 09:28:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.673 09:28:47 event -- scripts/common.sh@344 -- # case "$op" in 00:07:52.673 09:28:47 event -- scripts/common.sh@345 -- # : 1 00:07:52.673 09:28:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.673 09:28:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.673 09:28:47 event -- scripts/common.sh@365 -- # decimal 1 00:07:52.673 09:28:47 event -- scripts/common.sh@353 -- # local d=1 00:07:52.673 09:28:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.673 09:28:47 event -- scripts/common.sh@355 -- # echo 1 00:07:52.673 09:28:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.673 09:28:47 event -- scripts/common.sh@366 -- # decimal 2 00:07:52.673 09:28:47 event -- scripts/common.sh@353 -- # local d=2 00:07:52.673 09:28:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.673 09:28:47 event -- scripts/common.sh@355 -- # echo 2 00:07:52.673 09:28:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.673 09:28:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.673 09:28:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.673 09:28:47 event -- scripts/common.sh@368 -- # return 0 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.674 --rc genhtml_branch_coverage=1 00:07:52.674 --rc genhtml_function_coverage=1 00:07:52.674 --rc genhtml_legend=1 00:07:52.674 --rc geninfo_all_blocks=1 00:07:52.674 --rc geninfo_unexecuted_blocks=1 00:07:52.674 00:07:52.674 ' 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.674 --rc genhtml_branch_coverage=1 00:07:52.674 --rc genhtml_function_coverage=1 00:07:52.674 --rc genhtml_legend=1 00:07:52.674 --rc geninfo_all_blocks=1 00:07:52.674 --rc geninfo_unexecuted_blocks=1 00:07:52.674 00:07:52.674 ' 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.674 --rc genhtml_branch_coverage=1 00:07:52.674 --rc genhtml_function_coverage=1 00:07:52.674 --rc genhtml_legend=1 00:07:52.674 --rc geninfo_all_blocks=1 00:07:52.674 --rc geninfo_unexecuted_blocks=1 00:07:52.674 00:07:52.674 ' 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.674 --rc genhtml_branch_coverage=1 00:07:52.674 --rc genhtml_function_coverage=1 00:07:52.674 --rc genhtml_legend=1 00:07:52.674 --rc geninfo_all_blocks=1 00:07:52.674 --rc geninfo_unexecuted_blocks=1 00:07:52.674 00:07:52.674 ' 00:07:52.674 09:28:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:52.674 09:28:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:52.674 09:28:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:52.674 09:28:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.932 09:28:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.932 ************************************ 00:07:52.932 START TEST event_perf 00:07:52.932 ************************************ 00:07:52.932 09:28:47 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:07:52.932 Running I/O for 1 seconds...[2024-10-07 09:28:47.545715] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:52.932 [2024-10-07 09:28:47.545825] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411921 ] 00:07:52.932 [2024-10-07 09:28:47.626852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.189 [2024-10-07 09:28:47.755662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.189 [2024-10-07 09:28:47.755740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.189 [2024-10-07 09:28:47.755791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.190 [2024-10-07 09:28:47.755795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.121 Running I/O for 1 seconds... 00:07:54.121 lcore 0: 231057 00:07:54.121 lcore 1: 231057 00:07:54.121 lcore 2: 231057 00:07:54.121 lcore 3: 231056 00:07:54.121 done. 00:07:54.121 00:07:54.121 real 0m1.354s 00:07:54.121 user 0m4.235s 00:07:54.121 sys 0m0.113s 00:07:54.121 09:28:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.121 09:28:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.121 ************************************ 00:07:54.121 END TEST event_perf 00:07:54.121 ************************************ 00:07:54.121 09:28:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:54.121 09:28:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:54.121 09:28:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.121 09:28:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.380 ************************************ 00:07:54.380 START TEST event_reactor 00:07:54.380 ************************************ 00:07:54.380 09:28:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:54.380 [2024-10-07 09:28:48.958121] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:07:54.380 [2024-10-07 09:28:48.958211] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412197 ] 00:07:54.380 [2024-10-07 09:28:49.031933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.380 [2024-10-07 09:28:49.152141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.754 test_start 00:07:55.754 oneshot 00:07:55.754 tick 100 00:07:55.754 tick 100 00:07:55.754 tick 250 00:07:55.754 tick 100 00:07:55.754 tick 100 00:07:55.754 tick 100 00:07:55.754 tick 250 00:07:55.754 tick 500 00:07:55.754 tick 100 00:07:55.754 tick 100 00:07:55.754 tick 250 00:07:55.754 tick 100 00:07:55.754 tick 100 00:07:55.754 test_end 00:07:55.754 00:07:55.754 real 0m1.339s 00:07:55.754 user 0m1.242s 00:07:55.754 sys 0m0.091s 00:07:55.754 09:28:50 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.754 09:28:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:55.754 ************************************ 00:07:55.754 END TEST event_reactor 00:07:55.754 ************************************ 00:07:55.754 09:28:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:55.754 09:28:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:55.754 09:28:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.754 09:28:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.754 ************************************ 00:07:55.754 START TEST event_reactor_perf 00:07:55.754 ************************************ 00:07:55.754 09:28:50 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:55.754 [2024-10-07 09:28:50.375756] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:07:55.754 [2024-10-07 09:28:50.375923] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412357 ] 00:07:55.754 [2024-10-07 09:28:50.467683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.012 [2024-10-07 09:28:50.602719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.948 test_start 00:07:56.948 test_end 00:07:56.948 Performance: 354726 events per second 00:07:56.948 00:07:56.948 real 0m1.379s 00:07:56.948 user 0m1.263s 00:07:56.948 sys 0m0.109s 00:07:56.948 09:28:51 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.948 09:28:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.948 ************************************ 00:07:56.948 END TEST event_reactor_perf 00:07:56.948 ************************************ 00:07:56.948 09:28:51 event -- event/event.sh@49 -- # uname -s 00:07:57.207 09:28:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:57.207 09:28:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:57.207 09:28:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.207 09:28:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.207 09:28:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.207 ************************************ 00:07:57.207 START TEST event_scheduler 00:07:57.207 ************************************ 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:57.207 * Looking for test storage... 
00:07:57.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.207 09:28:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.207 --rc genhtml_branch_coverage=1 00:07:57.207 --rc genhtml_function_coverage=1 00:07:57.207 --rc genhtml_legend=1 00:07:57.207 --rc geninfo_all_blocks=1 00:07:57.207 --rc geninfo_unexecuted_blocks=1 00:07:57.207 00:07:57.207 ' 00:07:57.207 09:28:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.207 --rc genhtml_branch_coverage=1 00:07:57.207 --rc genhtml_function_coverage=1 00:07:57.207 --rc genhtml_legend=1 00:07:57.207 --rc geninfo_all_blocks=1 00:07:57.207 --rc geninfo_unexecuted_blocks=1 00:07:57.208 00:07:57.208 ' 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.208 --rc genhtml_branch_coverage=1 00:07:57.208 --rc genhtml_function_coverage=1 00:07:57.208 --rc genhtml_legend=1 00:07:57.208 --rc geninfo_all_blocks=1 00:07:57.208 --rc geninfo_unexecuted_blocks=1 00:07:57.208 00:07:57.208 ' 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.208 --rc genhtml_branch_coverage=1 00:07:57.208 --rc genhtml_function_coverage=1 00:07:57.208 --rc genhtml_legend=1 00:07:57.208 --rc geninfo_all_blocks=1 00:07:57.208 --rc geninfo_unexecuted_blocks=1 00:07:57.208 00:07:57.208 ' 00:07:57.208 09:28:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:57.208 09:28:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1412551 00:07:57.208 09:28:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:57.208 09:28:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.208 09:28:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1412551 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1412551 ']' 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.208 09:28:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.466 [2024-10-07 09:28:52.042182] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:07:57.466 [2024-10-07 09:28:52.042326] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412551 ] 00:07:57.466 [2024-10-07 09:28:52.125070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.466 [2024-10-07 09:28:52.246584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.466 [2024-10-07 09:28:52.246645] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.466 [2024-10-07 09:28:52.246712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.466 [2024-10-07 09:28:52.246716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:57.724 09:28:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 [2024-10-07 09:28:52.311570] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:57.724 [2024-10-07 09:28:52.311596] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:57.724 [2024-10-07 09:28:52.311630] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:57.724 [2024-10-07 09:28:52.311641] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:57.724 [2024-10-07 09:28:52.311651] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 [2024-10-07 09:28:52.411768] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
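The trace above brings the scheduler test app up with --wait-for-rpc, switches it to the dynamic scheduler (the dpdk governor cannot initialize because the 0xF core mask covers only part of an SMT sibling set, so only the load/core/busy limits take effect), and then runs framework_start_init. A rough, hand-driven equivalent against a running target, using the rpc.py helper seen elsewhere in this run; the socket path below matches the default used by waitforlisten here, but exact flags are illustrative rather than copied from scheduler.sh:

    # assumes the target (spdk_tgt or the scheduler test app) was started with --wait-for-rpc
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # select the dynamic scheduler
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler           # confirm the active scheduler
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init              # finish subsystem initialization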
00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 ************************************ 00:07:57.724 START TEST scheduler_create_thread 00:07:57.724 ************************************ 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 2 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 3 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 4 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 5 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 6 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.724 7 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.724 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.725 8 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.725 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 9 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 10 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 09:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.916 09:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.916 09:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:58.916 09:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.916 09:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.287 09:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.287 09:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:00.287 09:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:00.288 09:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.288 09:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.220 09:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.220 00:08:01.220 real 0m3.383s 00:08:01.220 user 0m0.010s 00:08:01.220 sys 0m0.006s 00:08:01.220 09:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.220 09:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.220 ************************************ 00:08:01.220 END TEST scheduler_create_thread 00:08:01.220 ************************************ 00:08:01.220 09:28:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:01.220 09:28:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1412551 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1412551 ']' 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1412551 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412551 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412551' 00:08:01.220 killing process with pid 1412551 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1412551 00:08:01.220 09:28:55 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1412551 00:08:01.478 [2024-10-07 09:28:56.220794] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
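The teardown paths in this run (spdkcli_tcp, dpdk_mem_utility, and now the scheduler app) all go through the same killprocess steps visible in the xtrace: probe the pid with kill -0, read its command name with ps, then kill and wait for it. A condensed sketch of that pattern follows, with names kept illustrative rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                # nothing to do if the pid is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_2 in the traces above
        # the real helper special-cases processes wrapped in sudo at this point
        kill "$pid"
        wait "$pid"                               # reap the child and propagate its exit status
    }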
00:08:01.737 00:08:01.737 real 0m4.735s 00:08:01.737 user 0m8.249s 00:08:01.737 sys 0m0.437s 00:08:01.737 09:28:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.737 09:28:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.737 ************************************ 00:08:01.737 END TEST event_scheduler 00:08:01.737 ************************************ 00:08:01.996 09:28:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:01.996 09:28:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:01.996 09:28:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.996 09:28:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.996 09:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 ************************************ 00:08:01.996 START TEST app_repeat 00:08:01.996 ************************************ 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1413135 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1413135' 00:08:01.996 Process app_repeat pid: 1413135 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:01.996 spdk_app_start Round 0 00:08:01.996 09:28:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1413135 /var/tmp/spdk-nbd.sock 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1413135 ']' 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:01.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.996 09:28:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 [2024-10-07 09:28:56.649085] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:01.996 [2024-10-07 09:28:56.649148] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413135 ] 00:08:01.996 [2024-10-07 09:28:56.740291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.254 [2024-10-07 09:28:56.866048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.254 [2024-10-07 09:28:56.866056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.254 09:28:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.254 09:28:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:02.254 09:28:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.512 Malloc0 00:08:02.512 09:28:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:03.455 Malloc1 00:08:03.455 09:28:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.455 09:28:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:04.021 /dev/nbd0 00:08:04.021 09:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.021 09:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.021 1+0 records in 00:08:04.021 1+0 records out 00:08:04.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018957 s, 21.6 MB/s 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.021 09:28:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:04.021 09:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.021 09:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.021 09:28:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:04.277 /dev/nbd1 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.277 1+0 records in 00:08:04.277 1+0 records out 00:08:04.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269046 s, 15.2 MB/s 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.277 09:28:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.277 09:28:59 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.277 09:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.839 09:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:04.839 { 00:08:04.839 "nbd_device": "/dev/nbd0", 00:08:04.839 "bdev_name": "Malloc0" 00:08:04.839 }, 00:08:04.839 { 00:08:04.839 "nbd_device": "/dev/nbd1", 00:08:04.839 "bdev_name": "Malloc1" 00:08:04.840 } 00:08:04.840 ]' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:04.840 { 00:08:04.840 "nbd_device": "/dev/nbd0", 00:08:04.840 "bdev_name": "Malloc0" 00:08:04.840 }, 00:08:04.840 { 00:08:04.840 "nbd_device": "/dev/nbd1", 00:08:04.840 "bdev_name": "Malloc1" 00:08:04.840 } 00:08:04.840 ]' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:04.840 /dev/nbd1' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:04.840 /dev/nbd1' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:04.840 256+0 records in 00:08:04.840 256+0 records out 00:08:04.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00880952 s, 119 MB/s 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:04.840 256+0 records in 00:08:04.840 256+0 records out 00:08:04.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260425 s, 40.3 MB/s 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.840 256+0 records in 00:08:04.840 256+0 records out 00:08:04.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232611 s, 45.1 MB/s 00:08:04.840 09:28:59 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.840 09:28:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.770 09:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.336 09:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.593 09:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.594 09:29:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.594 09:29:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:07.159 09:29:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:07.418 [2024-10-07 09:29:01.980609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:07.418 [2024-10-07 09:29:02.097837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.418 [2024-10-07 09:29:02.097837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.418 [2024-10-07 09:29:02.160874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:07.418 [2024-10-07 09:29:02.160965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:09.945 09:29:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:09.945 09:29:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:09.945 spdk_app_start Round 1 00:08:09.945 09:29:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1413135 /var/tmp/spdk-nbd.sock 00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1413135 ']' 00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
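Round 0 above is nbd_rpc_data_verify at work: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written to each, read back and compared with cmp, and the devices are stopped again before the app is killed with spdk_kill_instance. A condensed sketch of that flow, assuming an app is already serving RPCs on /var/tmp/spdk-nbd.sock and the kernel nbd module is loaded (the scratch file path is illustrative; the real helpers are the bdev/nbd_common.sh functions whose xtrace appears above):

# Condensed, illustrative version of the data-verify round traced above.
rpc_py() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp_file=/tmp/nbdrandtest    # illustrative scratch path

# Create two malloc bdevs (same 64 / 4096 arguments as the trace) and export
# them as kernel nbd devices.
malloc0=$(rpc_py bdev_malloc_create 64 4096)
malloc1=$(rpc_py bdev_malloc_create 64 4096)
rpc_py nbd_start_disk "$malloc0" /dev/nbd0
rpc_py nbd_start_disk "$malloc1" /dev/nbd1

# Write the same 1 MiB of random data to both devices ...
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# ... and verify each device reads back byte-identical data.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"

# Tear down and confirm nbd_get_disks reports no exported devices.
rpc_py nbd_stop_disk /dev/nbd0
rpc_py nbd_stop_disk /dev/nbd1
[ "$(rpc_py nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" -eq 0 ]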
00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.945 09:29:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:10.512 09:29:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.512 09:29:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:10.512 09:29:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:11.113 Malloc0 00:08:11.113 09:29:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:11.704 Malloc1 00:08:11.704 09:29:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.704 09:29:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.704 09:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.704 09:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:11.704 09:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.704 09:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.705 09:29:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:12.270 /dev/nbd0 00:08:12.270 09:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:12.270 09:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:12.270 1+0 records in 00:08:12.270 1+0 records out 00:08:12.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167019 s, 24.5 MB/s 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:12.270 09:29:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:12.270 09:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.270 09:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.270 09:29:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:12.836 /dev/nbd1 00:08:12.836 09:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:12.836 09:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:12.836 1+0 records in 00:08:12.836 1+0 records out 00:08:12.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235807 s, 17.4 MB/s 00:08:12.836 09:29:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:12.837 09:29:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:12.837 09:29:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:12.837 09:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:12.837 09:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:12.837 09:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.837 09:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.837 09:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.837 09:29:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.837 09:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:13.768 { 00:08:13.768 "nbd_device": "/dev/nbd0", 00:08:13.768 "bdev_name": "Malloc0" 00:08:13.768 }, 00:08:13.768 { 00:08:13.768 "nbd_device": "/dev/nbd1", 00:08:13.768 "bdev_name": "Malloc1" 00:08:13.768 } 00:08:13.768 ]' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:13.768 { 00:08:13.768 "nbd_device": "/dev/nbd0", 00:08:13.768 "bdev_name": "Malloc0" 00:08:13.768 }, 00:08:13.768 { 00:08:13.768 "nbd_device": "/dev/nbd1", 00:08:13.768 "bdev_name": "Malloc1" 00:08:13.768 } 00:08:13.768 ]' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:13.768 /dev/nbd1' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:13.768 /dev/nbd1' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:13.768 256+0 records in 00:08:13.768 256+0 records out 00:08:13.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760322 s, 138 MB/s 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:13.768 256+0 records in 00:08:13.768 256+0 records out 00:08:13.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264457 s, 39.7 MB/s 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:13.768 256+0 records in 00:08:13.768 256+0 records out 00:08:13.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238694 s, 43.9 MB/s 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.768 09:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.025 09:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.283 09:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.850 09:29:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.850 09:29:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:15.109 09:29:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:15.367 [2024-10-07 09:29:10.076272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.626 [2024-10-07 09:29:10.198502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.626 [2024-10-07 09:29:10.198516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.626 [2024-10-07 09:29:10.263182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:15.626 [2024-10-07 09:29:10.263262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:18.148 09:29:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:18.148 09:29:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:18.148 spdk_app_start Round 2 00:08:18.148 09:29:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1413135 /var/tmp/spdk-nbd.sock 00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1413135 ']' 00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:18.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
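Each nbd attach above goes through the waitfornbd helper: it polls /proc/partitions until the named device appears, then copies a single 4 KiB block off it and checks the copy is non-empty before treating the device as usable. A simplified stand-alone version of that pattern (the retry cadence and scratch path are illustrative; the real helper is the common/autotest_common.sh function whose xtrace appears above):

# Simplified re-implementation of the waitfornbd pattern seen in the xtrace above.
waitfornbd() {
    local nbd_name=$1              # e.g. "nbd0", without the /dev/ prefix
    local scratch=/tmp/nbdtest     # illustrative scratch file
    local i size

    # Wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1

    # Make sure the device actually serves reads: copy one 4 KiB block
    # with O_DIRECT and check that the copy is non-empty.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}

# Hypothetical usage: waitfornbd nbd0 || exit 1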
00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.148 09:29:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:18.712 09:29:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.712 09:29:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:18.712 09:29:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:19.276 Malloc0 00:08:19.533 09:29:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:19.791 Malloc1 00:08:19.791 09:29:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:19.791 09:29:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:20.726 /dev/nbd0 00:08:20.726 09:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:20.726 09:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:20.726 1+0 records in 00:08:20.726 1+0 records out 00:08:20.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336423 s, 12.2 MB/s 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:20.726 09:29:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:20.726 09:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:20.726 09:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:20.726 09:29:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:20.985 /dev/nbd1 00:08:21.243 09:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:21.243 09:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:21.243 09:29:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:21.243 09:29:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:21.243 09:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:21.243 09:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:21.243 09:29:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:21.244 1+0 records in 00:08:21.244 1+0 records out 00:08:21.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181204 s, 22.6 MB/s 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:21.244 09:29:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:21.244 09:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.244 09:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:21.244 09:29:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.244 09:29:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.244 09:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:21.502 { 00:08:21.502 "nbd_device": "/dev/nbd0", 00:08:21.502 "bdev_name": "Malloc0" 00:08:21.502 }, 00:08:21.502 { 00:08:21.502 "nbd_device": "/dev/nbd1", 00:08:21.502 "bdev_name": "Malloc1" 00:08:21.502 } 00:08:21.502 ]' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:21.502 { 00:08:21.502 "nbd_device": "/dev/nbd0", 00:08:21.502 "bdev_name": "Malloc0" 00:08:21.502 }, 00:08:21.502 { 00:08:21.502 "nbd_device": "/dev/nbd1", 00:08:21.502 "bdev_name": "Malloc1" 00:08:21.502 } 00:08:21.502 ]' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:21.502 /dev/nbd1' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:21.502 /dev/nbd1' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:21.502 256+0 records in 00:08:21.502 256+0 records out 00:08:21.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902968 s, 116 MB/s 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:21.502 256+0 records in 00:08:21.502 256+0 records out 00:08:21.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228456 s, 45.9 MB/s 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:21.502 256+0 records in 00:08:21.502 256+0 records out 00:08:21.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227797 s, 46.0 MB/s 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.502 09:29:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.761 09:29:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.328 09:29:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.586 09:29:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:23.151 09:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:23.409 09:29:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:23.409 09:29:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:23.667 09:29:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:23.926 [2024-10-07 09:29:18.658496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.184 [2024-10-07 09:29:18.780107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.184 [2024-10-07 09:29:18.780107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.184 [2024-10-07 09:29:18.843879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:24.184 [2024-10-07 09:29:18.843958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:26.712 09:29:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1413135 /var/tmp/spdk-nbd.sock 00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1413135 ']' 00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:26.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
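The nbd_get_count checks above take the JSON that nbd_get_disks returns, pull out the nbd_device fields with jq, and count how many /dev/nbd entries are present (2 while the malloc bdevs are attached, 0 after nbd_stop_disk). A small helper in the same spirit, reusing the RPC socket path from the trace:

# Illustrative counterpart of the nbd_get_count helper traced above.
rpc_py() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

nbd_get_count() {
    # nbd_get_disks returns JSON of the form:
    #   [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
    local names
    names=$(rpc_py nbd_get_disks | jq -r '.[] | .nbd_device')
    # grep -c still prints "0" when nothing matches, but exits non-zero,
    # so mask the exit status to keep set -e scripts happy.
    echo "$names" | grep -c /dev/nbd || true
}

count=$(nbd_get_count)
[ "$count" -eq 2 ] || echo "unexpected nbd count: $count" >&2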
00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.712 09:29:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:26.970 09:29:21 event.app_repeat -- event/event.sh@39 -- # killprocess 1413135 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1413135 ']' 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1413135 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.970 09:29:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413135 00:08:27.228 09:29:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.228 09:29:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.228 09:29:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413135' 00:08:27.228 killing process with pid 1413135 00:08:27.228 09:29:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1413135 00:08:27.228 09:29:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1413135 00:08:27.486 spdk_app_start is called in Round 0. 00:08:27.486 Shutdown signal received, stop current app iteration 00:08:27.486 Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 reinitialization... 00:08:27.486 spdk_app_start is called in Round 1. 00:08:27.486 Shutdown signal received, stop current app iteration 00:08:27.486 Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 reinitialization... 00:08:27.486 spdk_app_start is called in Round 2. 00:08:27.486 Shutdown signal received, stop current app iteration 00:08:27.486 Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 reinitialization... 00:08:27.486 spdk_app_start is called in Round 3. 
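killprocess, whose trace closes the app_repeat run above, only signals processes it can account for: it checks that the pid is non-empty and still alive with kill -0, resolves the command name with ps so it never kills a sudo wrapper by mistake, and only then sends the signal and waits for the pid to exit. A stand-alone approximation of that flow (messages and the early-return behaviour are illustrative; the real helper is the common/autotest_common.sh function whose xtrace appears above and handles more corner cases):

# Approximation of the killprocess helper traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1

    # Already gone? Then there is nothing to kill.
    kill -0 "$pid" 2>/dev/null || return 0

    if [ "$(uname)" = Linux ]; then
        # Refuse to signal the sudo wrapper itself, only the real worker
        # (the trace resolves this to reactor_0 for the app_repeat pid).
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # assumes the pid is a child of this shell, as in the test
}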
00:08:27.487 Shutdown signal received, stop current app iteration 00:08:27.487 09:29:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:27.487 09:29:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:27.487 00:08:27.487 real 0m25.451s 00:08:27.487 user 0m59.071s 00:08:27.487 sys 0m4.921s 00:08:27.487 09:29:22 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.487 09:29:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:27.487 ************************************ 00:08:27.487 END TEST app_repeat 00:08:27.487 ************************************ 00:08:27.487 09:29:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:27.487 09:29:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:27.487 09:29:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.487 09:29:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.487 09:29:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:27.487 ************************************ 00:08:27.487 START TEST cpu_locks 00:08:27.487 ************************************ 00:08:27.487 09:29:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:27.487 * Looking for test storage... 00:08:27.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:27.487 09:29:22 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.487 09:29:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.487 09:29:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.744 09:29:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.744 --rc genhtml_branch_coverage=1 00:08:27.744 --rc genhtml_function_coverage=1 00:08:27.744 --rc genhtml_legend=1 00:08:27.744 --rc geninfo_all_blocks=1 00:08:27.744 --rc geninfo_unexecuted_blocks=1 00:08:27.744 00:08:27.744 ' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.744 --rc genhtml_branch_coverage=1 00:08:27.744 --rc genhtml_function_coverage=1 00:08:27.744 --rc genhtml_legend=1 00:08:27.744 --rc geninfo_all_blocks=1 00:08:27.744 --rc geninfo_unexecuted_blocks=1 00:08:27.744 00:08:27.744 ' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.744 --rc genhtml_branch_coverage=1 00:08:27.744 --rc genhtml_function_coverage=1 00:08:27.744 --rc genhtml_legend=1 00:08:27.744 --rc geninfo_all_blocks=1 00:08:27.744 --rc geninfo_unexecuted_blocks=1 00:08:27.744 00:08:27.744 ' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.744 --rc genhtml_branch_coverage=1 00:08:27.744 --rc genhtml_function_coverage=1 00:08:27.744 --rc genhtml_legend=1 00:08:27.744 --rc geninfo_all_blocks=1 00:08:27.744 --rc geninfo_unexecuted_blocks=1 00:08:27.744 00:08:27.744 ' 00:08:27.744 09:29:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:27.744 09:29:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:27.744 09:29:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:27.744 09:29:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.744 09:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.744 ************************************ 
00:08:27.744 START TEST default_locks 00:08:27.744 ************************************ 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1416356 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1416356 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1416356 ']' 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.745 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.745 [2024-10-07 09:29:22.472094] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:27.745 [2024-10-07 09:29:22.472192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416356 ] 00:08:27.745 [2024-10-07 09:29:22.542835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.002 [2024-10-07 09:29:22.671233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.261 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.261 09:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:28.261 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1416356 00:08:28.261 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1416356 00:08:28.261 09:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:28.519 lslocks: write error 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1416356 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1416356 ']' 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1416356 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416356 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1416356' 00:08:28.519 killing process with pid 1416356 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1416356 00:08:28.519 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1416356 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1416356 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1416356 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1416356 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1416356 ']' 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
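The lock check exercised in the default_locks run above is locks_exist: spdk_tgt takes one advisory lock per claimed core under /var/tmp, and lslocks reports it against the target's PID. Per the xtrace, the helper amounts to:

    locks_exist() {
        local pid=$1
        # an spdk_cpu_lock_* entry must be listed for this PID
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

The "lslocks: write error" printed above is benign: grep -q exits as soon as it matches and closes the pipe, so lslocks fails its remaining writes.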
00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1416356) - No such process 00:08:29.085 ERROR: process (pid: 1416356) is no longer running 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:29.085 00:08:29.085 real 0m1.401s 00:08:29.085 user 0m1.359s 00:08:29.085 sys 0m0.607s 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.085 09:29:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.085 ************************************ 00:08:29.085 END TEST default_locks 00:08:29.085 ************************************ 00:08:29.085 09:29:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:29.086 09:29:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.086 09:29:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.086 09:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.086 ************************************ 00:08:29.086 START TEST default_locks_via_rpc 00:08:29.086 ************************************ 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1416575 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1416575 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1416575 ']' 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
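After killprocess, the test asserts the opposite condition with the harness's NOT wrapper: waitforlisten on the dead PID must fail ("No such process" above), and only that failure lets the test pass. A simplified sketch of the pattern (the real NOT in autotest_common.sh also validates the wrapped command and distinguishes signal exits via the es > 128 check seen in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))            # succeed only if the wrapped command failed
    }
    # usage, as in the trace: NOT waitforlisten "$pid"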
00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.086 09:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.344 [2024-10-07 09:29:23.925018] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:29.344 [2024-10-07 09:29:23.925112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416575 ] 00:08:29.344 [2024-10-07 09:29:23.995252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.344 [2024-10-07 09:29:24.122553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.603 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.603 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:29.603 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:29.603 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.603 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1416575 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1416575 00:08:29.861 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1416575 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1416575 ']' 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1416575 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416575 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.120 
09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416575' 00:08:30.120 killing process with pid 1416575 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1416575 00:08:30.120 09:29:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1416575 00:08:30.687 00:08:30.687 real 0m1.579s 00:08:30.687 user 0m1.553s 00:08:30.687 sys 0m0.659s 00:08:30.687 09:29:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.687 09:29:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.687 ************************************ 00:08:30.687 END TEST default_locks_via_rpc 00:08:30.687 ************************************ 00:08:30.687 09:29:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:30.687 09:29:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.687 09:29:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.687 09:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.946 ************************************ 00:08:30.946 START TEST non_locking_app_on_locked_coremask 00:08:30.946 ************************************ 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1416740 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1416740 /var/tmp/spdk.sock 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1416740 ']' 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.946 09:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.946 [2024-10-07 09:29:25.600760] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
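The default_locks_via_rpc run above shows the same core locks being dropped and re-taken at runtime rather than at startup: framework_disable_cpumask_locks releases the per-core lock files (no_locks then finds none under /var/tmp), and framework_enable_cpumask_locks claims them again. Condensed, assuming rpc_cmd wraps scripts/rpc.py against the sockets named in the log:

    ./scripts/rpc.py framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null        # expected: nothing while locks are disabled
    ./scripts/rpc.py framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock     # lock on core 0 is back ($tgt_pid: the spdk_tgt PID, illustrative name)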
00:08:30.946 [2024-10-07 09:29:25.600918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416740 ] 00:08:30.946 [2024-10-07 09:29:25.687590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.204 [2024-10-07 09:29:25.815271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1416869 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1416869 /var/tmp/spdk2.sock 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1416869 ']' 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:31.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.463 09:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:31.463 [2024-10-07 09:29:26.174778] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:31.463 [2024-10-07 09:29:26.174871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416869 ] 00:08:31.463 [2024-10-07 09:29:26.272723] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
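non_locking_app_on_locked_coremask, starting above, pairs a locking primary with a non-locking secondary: the first spdk_tgt claims core 0, and the second is allowed onto the same core only because it is started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice). Condensed from the traced command lines, with paths shortened to the in-tree layout:

    ./build/bin/spdk_tgt -m 0x1 &                             # claims the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                              # shares core 0, takes no lock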
00:08:31.463 [2024-10-07 09:29:26.272756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.722 [2024-10-07 09:29:26.525704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.288 09:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.288 09:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:32.288 09:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1416740 00:08:32.288 09:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1416740 00:08:32.288 09:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:33.664 lslocks: write error 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1416740 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1416740 ']' 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1416740 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416740 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416740' 00:08:33.664 killing process with pid 1416740 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1416740 00:08:33.664 09:29:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1416740 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1416869 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1416869 ']' 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1416869 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416869 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416869' 00:08:34.598 
killing process with pid 1416869 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1416869 00:08:34.598 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1416869 00:08:35.165 00:08:35.165 real 0m4.225s 00:08:35.165 user 0m4.589s 00:08:35.165 sys 0m1.472s 00:08:35.165 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.165 09:29:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 ************************************ 00:08:35.165 END TEST non_locking_app_on_locked_coremask 00:08:35.165 ************************************ 00:08:35.165 09:29:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:35.165 09:29:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.165 09:29:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.165 09:29:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 ************************************ 00:08:35.165 START TEST locking_app_on_unlocked_coremask 00:08:35.165 ************************************ 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1417298 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1417298 /var/tmp/spdk.sock 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417298 ']' 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.165 09:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 [2024-10-07 09:29:29.862344] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:35.165 [2024-10-07 09:29:29.862447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417298 ] 00:08:35.165 [2024-10-07 09:29:29.933320] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:35.165 [2024-10-07 09:29:29.933365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.424 [2024-10-07 09:29:30.059555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1417313 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1417313 /var/tmp/spdk2.sock 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417313 ']' 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:35.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.682 09:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.682 [2024-10-07 09:29:30.420546] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
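locking_app_on_unlocked_coremask reverses the roles: here the primary is the instance started with --disable-cpumask-locks, so the secondary launched above without the flag is free to claim core 0, and the lock check that follows is run against the secondary's PID. Condensed:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # primary, holds no lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # secondary claims core 0
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                # $pid2: secondary's PID (illustrative variable name)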
00:08:35.682 [2024-10-07 09:29:30.420632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417313 ] 00:08:35.940 [2024-10-07 09:29:30.517583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.198 [2024-10-07 09:29:30.763090] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.765 09:29:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.765 09:29:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:36.765 09:29:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1417313 00:08:36.765 09:29:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1417313 00:08:36.765 09:29:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:38.138 lslocks: write error 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1417298 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1417298 ']' 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1417298 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417298 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417298' 00:08:38.138 killing process with pid 1417298 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1417298 00:08:38.138 09:29:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1417298 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1417313 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1417313 ']' 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1417313 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417313 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.072 09:29:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417313' 00:08:39.072 killing process with pid 1417313 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1417313 00:08:39.072 09:29:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1417313 00:08:39.637 00:08:39.637 real 0m4.466s 00:08:39.637 user 0m4.817s 00:08:39.637 sys 0m1.435s 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.637 ************************************ 00:08:39.637 END TEST locking_app_on_unlocked_coremask 00:08:39.637 ************************************ 00:08:39.637 09:29:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:39.637 09:29:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.637 09:29:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.637 09:29:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.637 ************************************ 00:08:39.637 START TEST locking_app_on_locked_coremask 00:08:39.637 ************************************ 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1417866 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1417866 /var/tmp/spdk.sock 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417866 ']' 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.637 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.637 [2024-10-07 09:29:34.438681] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:39.637 [2024-10-07 09:29:34.438846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417866 ] 00:08:39.895 [2024-10-07 09:29:34.530818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.895 [2024-10-07 09:29:34.653938] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1417887 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1417887 /var/tmp/spdk2.sock 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1417887 /var/tmp/spdk2.sock 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1417887 /var/tmp/spdk2.sock 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417887 ']' 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.153 09:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 [2024-10-07 09:29:35.030586] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:40.411 [2024-10-07 09:29:35.030682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417887 ] 00:08:40.411 [2024-10-07 09:29:35.134855] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1417866 has claimed it. 00:08:40.411 [2024-10-07 09:29:35.134924] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:41.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1417887) - No such process 00:08:41.345 ERROR: process (pid: 1417887) is no longer running 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1417866 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1417866 00:08:41.345 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.912 lslocks: write error 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1417866 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1417866 ']' 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1417866 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417866 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417866' 00:08:41.912 killing process with pid 1417866 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1417866 00:08:41.912 09:29:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1417866 00:08:42.483 00:08:42.483 real 0m2.855s 00:08:42.483 user 0m3.641s 00:08:42.483 sys 0m0.876s 00:08:42.483 09:29:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
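locking_app_on_locked_coremask is the negative case: with the core-0 lock already held by pid 1417866, the second target's claim_cpu_cores fails ("Cannot create lock on core 0, probably process 1417866 has claimed it") and spdk_app_start exits instead of listening, which is exactly what NOT waitforlisten asserts above. The conflict can be reproduced with the same flags:

    ./build/bin/spdk_tgt -m 0x1 &                        # holds the advisory lock file for core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: cannot lock core 0
    ls /var/tmp/spdk_cpu_lock_*                          # one lock file per claimed core under /var/tmp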
00:08:42.483 09:29:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:42.483 ************************************ 00:08:42.483 END TEST locking_app_on_locked_coremask 00:08:42.483 ************************************ 00:08:42.483 09:29:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:42.483 09:29:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.483 09:29:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.483 09:29:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.483 ************************************ 00:08:42.483 START TEST locking_overlapped_coremask 00:08:42.483 ************************************ 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1418199 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1418199 /var/tmp/spdk.sock 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418199 ']' 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.483 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:42.792 [2024-10-07 09:29:37.317965] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:42.792 [2024-10-07 09:29:37.318071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418199 ] 00:08:42.792 [2024-10-07 09:29:37.391325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.792 [2024-10-07 09:29:37.514452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.792 [2024-10-07 09:29:37.514508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.792 [2024-10-07 09:29:37.514514] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1418303 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1418303 /var/tmp/spdk2.sock 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1418303 /var/tmp/spdk2.sock 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1418303 /var/tmp/spdk2.sock 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418303 ']' 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:43.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.077 09:29:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:43.335 [2024-10-07 09:29:37.920095] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:43.335 [2024-10-07 09:29:37.920219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418303 ] 00:08:43.335 [2024-10-07 09:29:38.017752] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1418199 has claimed it. 00:08:43.335 [2024-10-07 09:29:38.017826] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:43.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1418303) - No such process 00:08:43.909 ERROR: process (pid: 1418303) is no longer running 00:08:43.909 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.909 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1418199 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1418199 ']' 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1418199 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418199 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418199' 00:08:44.167 killing process with pid 1418199 00:08:44.167 09:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1418199 00:08:44.167 09:29:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1418199 00:08:44.731 00:08:44.731 real 0m2.046s 00:08:44.731 user 0m5.754s 00:08:44.731 sys 0m0.557s 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.731 ************************************ 00:08:44.731 END TEST locking_overlapped_coremask 00:08:44.731 ************************************ 00:08:44.731 09:29:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:44.731 09:29:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.731 09:29:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.731 09:29:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.731 ************************************ 00:08:44.731 START TEST locking_overlapped_coremask_via_rpc 00:08:44.731 ************************************ 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1418479 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1418479 /var/tmp/spdk.sock 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1418479 ']' 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.731 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.731 [2024-10-07 09:29:39.427545] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:44.731 [2024-10-07 09:29:39.427638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418479 ] 00:08:44.731 [2024-10-07 09:29:39.496667] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
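For the overlapped-coremask case above, the first target ran with -m 0x7 and the second with -m 0x1c, so they collide on core 2 and only the first survives. The trace then verifies that the survivor still holds exactly its own locks via check_remaining_locks, which per the xtrace compares the lock-file glob against the expected brace expansion:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${expected[*]}" ]]
    }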
00:08:44.731 [2024-10-07 09:29:39.496718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.989 [2024-10-07 09:29:39.630390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.989 [2024-10-07 09:29:39.630462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.989 [2024-10-07 09:29:39.630466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1418604 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1418604 /var/tmp/spdk2.sock 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1418604 ']' 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:45.245 09:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.245 [2024-10-07 09:29:40.023974] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:45.245 [2024-10-07 09:29:40.024090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418604 ] 00:08:45.505 [2024-10-07 09:29:40.140710] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:45.505 [2024-10-07 09:29:40.140759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.762 [2024-10-07 09:29:40.384315] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.762 [2024-10-07 09:29:40.387978] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.762 [2024-10-07 09:29:40.387981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.327 [2024-10-07 09:29:40.958990] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1418479 has claimed it. 
00:08:46.327 request: 00:08:46.327 { 00:08:46.327 "method": "framework_enable_cpumask_locks", 00:08:46.327 "req_id": 1 00:08:46.327 } 00:08:46.327 Got JSON-RPC error response 00:08:46.327 response: 00:08:46.327 { 00:08:46.327 "code": -32603, 00:08:46.327 "message": "Failed to claim CPU core: 2" 00:08:46.327 } 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1418479 /var/tmp/spdk.sock 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1418479 ']' 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.327 09:29:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1418604 /var/tmp/spdk2.sock 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1418604 ']' 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:46.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
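The failure above is the intended outcome of this scenario: the first spdk_tgt was started with core mask 0x7 (cores 0-2) and the second with 0x1c (cores 2-4), both with --disable-cpumask-locks so they could boot on overlapping masks; enabling the locks over RPC then succeeds on the first target, which takes /var/tmp/spdk_cpu_lock_000 through _002, and fails on the second with JSON-RPC error -32603 because core 2 is already claimed. A rough sketch of the overlap arithmetic and the follow-up checks, assuming the rpc_cmd wrapper in the log forwards to scripts/rpc.py with the socket paths shown:

# The two masks used by the test share exactly one core.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2

# Enable CPU core locks on each target; the second call is expected to fail
# with -32603 "Failed to claim CPU core: 2".
scripts/rpc.py framework_enable_cpumask_locks
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'claim rejected as expected'

# Lock files held by the first target, mirroring check_remaining_locks():
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'only cores 0-2 are locked'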
00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.890 09:29:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:47.455 00:08:47.455 real 0m2.843s 00:08:47.455 user 0m1.922s 00:08:47.455 sys 0m0.262s 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.455 09:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.455 ************************************ 00:08:47.455 END TEST locking_overlapped_coremask_via_rpc 00:08:47.455 ************************************ 00:08:47.455 09:29:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:47.455 09:29:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1418479 ]] 00:08:47.455 09:29:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1418479 00:08:47.455 09:29:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1418479 ']' 00:08:47.455 09:29:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1418479 00:08:47.455 09:29:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:47.455 09:29:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.455 09:29:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418479 00:08:47.712 09:29:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.712 09:29:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.712 09:29:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418479' 00:08:47.712 killing process with pid 1418479 00:08:47.712 09:29:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1418479 00:08:47.713 09:29:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1418479 00:08:47.970 09:29:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1418604 ]] 00:08:47.970 09:29:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1418604 00:08:47.970 09:29:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1418604 ']' 00:08:47.970 09:29:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1418604 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418604 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418604' 00:08:48.228 killing process with pid 1418604 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1418604 00:08:48.228 09:29:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1418604 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1418479 ]] 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1418479 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1418479 ']' 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1418479 00:08:48.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1418479) - No such process 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1418479 is not found' 00:08:48.487 Process with pid 1418479 is not found 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1418604 ]] 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1418604 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1418604 ']' 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1418604 00:08:48.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1418604) - No such process 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1418604 is not found' 00:08:48.487 Process with pid 1418604 is not found 00:08:48.487 09:29:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:48.487 00:08:48.487 real 0m21.165s 00:08:48.487 user 0m39.076s 00:08:48.487 sys 0m7.007s 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.487 09:29:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:48.487 ************************************ 00:08:48.487 END TEST cpu_locks 00:08:48.487 ************************************ 00:08:48.746 00:08:48.746 real 0m56.111s 00:08:48.746 user 1m53.524s 00:08:48.746 sys 0m13.008s 00:08:48.746 09:29:43 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.746 09:29:43 event -- common/autotest_common.sh@10 -- # set +x 00:08:48.746 ************************************ 00:08:48.746 END TEST event 00:08:48.746 ************************************ 00:08:48.746 09:29:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:48.746 09:29:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.746 09:29:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.746 09:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:48.746 ************************************ 00:08:48.746 START TEST thread 00:08:48.746 ************************************ 00:08:48.746 09:29:43 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:48.746 * Looking for test storage... 00:08:48.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.746 09:29:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.746 09:29:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.746 09:29:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.746 09:29:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.746 09:29:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.746 09:29:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.746 09:29:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.746 09:29:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.746 09:29:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.746 09:29:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.746 09:29:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.746 09:29:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:48.746 09:29:43 thread -- scripts/common.sh@345 -- # : 1 00:08:48.746 09:29:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.746 09:29:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.746 09:29:43 thread -- scripts/common.sh@365 -- # decimal 1 00:08:48.746 09:29:43 thread -- scripts/common.sh@353 -- # local d=1 00:08:48.746 09:29:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.746 09:29:43 thread -- scripts/common.sh@355 -- # echo 1 00:08:48.746 09:29:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.746 09:29:43 thread -- scripts/common.sh@366 -- # decimal 2 00:08:48.746 09:29:43 thread -- scripts/common.sh@353 -- # local d=2 00:08:48.746 09:29:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.746 09:29:43 thread -- scripts/common.sh@355 -- # echo 2 00:08:48.746 09:29:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.746 09:29:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.746 09:29:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.746 09:29:43 thread -- scripts/common.sh@368 -- # return 0 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.746 --rc genhtml_branch_coverage=1 00:08:48.746 --rc genhtml_function_coverage=1 00:08:48.746 --rc genhtml_legend=1 00:08:48.746 --rc geninfo_all_blocks=1 00:08:48.746 --rc geninfo_unexecuted_blocks=1 00:08:48.746 00:08:48.746 ' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.746 --rc genhtml_branch_coverage=1 00:08:48.746 --rc genhtml_function_coverage=1 00:08:48.746 --rc genhtml_legend=1 00:08:48.746 --rc geninfo_all_blocks=1 00:08:48.746 --rc geninfo_unexecuted_blocks=1 00:08:48.746 
00:08:48.746 ' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.746 --rc genhtml_branch_coverage=1 00:08:48.746 --rc genhtml_function_coverage=1 00:08:48.746 --rc genhtml_legend=1 00:08:48.746 --rc geninfo_all_blocks=1 00:08:48.746 --rc geninfo_unexecuted_blocks=1 00:08:48.746 00:08:48.746 ' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.746 --rc genhtml_branch_coverage=1 00:08:48.746 --rc genhtml_function_coverage=1 00:08:48.746 --rc genhtml_legend=1 00:08:48.746 --rc geninfo_all_blocks=1 00:08:48.746 --rc geninfo_unexecuted_blocks=1 00:08:48.746 00:08:48.746 ' 00:08:48.746 09:29:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.746 09:29:43 thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.005 ************************************ 00:08:49.005 START TEST thread_poller_perf 00:08:49.005 ************************************ 00:08:49.005 09:29:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:49.005 [2024-10-07 09:29:43.624774] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:49.005 [2024-10-07 09:29:43.624970] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419110 ] 00:08:49.005 [2024-10-07 09:29:43.732435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.263 [2024-10-07 09:29:43.854317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.263 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:50.198 ====================================== 00:08:50.198 busy:2710664519 (cyc) 00:08:50.198 total_run_count: 292000 00:08:50.198 tsc_hz: 2700000000 (cyc) 00:08:50.198 ====================================== 00:08:50.198 poller_cost: 9283 (cyc), 3438 (nsec) 00:08:50.198 00:08:50.198 real 0m1.388s 00:08:50.198 user 0m1.252s 00:08:50.198 sys 0m0.129s 00:08:50.198 09:29:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.198 09:29:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:50.198 ************************************ 00:08:50.198 END TEST thread_poller_perf 00:08:50.198 ************************************ 00:08:50.457 09:29:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.457 09:29:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:50.457 09:29:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.457 09:29:45 thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.457 ************************************ 00:08:50.457 START TEST thread_poller_perf 00:08:50.457 ************************************ 00:08:50.457 09:29:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.457 [2024-10-07 09:29:45.077997] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:50.457 [2024-10-07 09:29:45.078074] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419264 ] 00:08:50.457 [2024-10-07 09:29:45.151110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.457 [2024-10-07 09:29:45.270771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.457 Running 1000 pollers for 1 seconds with 0 microseconds period. 
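The reported poller_cost works out to the measured busy cycle count divided by the number of poller executions, converted to nanoseconds with the reported TSC frequency; the same arithmetic applies to the 0-microsecond-period run whose output follows. Reproducing the 1-microsecond run above (2710664519 busy cycles over 292000 runs at 2.7 GHz):

busy=2710664519 runs=292000 tsc_hz=2700000000
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc  = b / r              # ~9283 cycles per poller invocation
    nsec = cyc * 1e9 / hz     # ~3438 ns at 2.7 GHz
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'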
00:08:51.832 ====================================== 00:08:51.832 busy:2702911816 (cyc) 00:08:51.832 total_run_count: 3849000 00:08:51.832 tsc_hz: 2700000000 (cyc) 00:08:51.832 ====================================== 00:08:51.832 poller_cost: 702 (cyc), 260 (nsec) 00:08:51.832 00:08:51.832 real 0m1.335s 00:08:51.832 user 0m1.239s 00:08:51.832 sys 0m0.090s 00:08:51.832 09:29:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.832 09:29:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:51.832 ************************************ 00:08:51.832 END TEST thread_poller_perf 00:08:51.832 ************************************ 00:08:51.832 09:29:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:51.832 00:08:51.832 real 0m3.036s 00:08:51.832 user 0m2.661s 00:08:51.832 sys 0m0.380s 00:08:51.832 09:29:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.832 09:29:46 thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.832 ************************************ 00:08:51.832 END TEST thread 00:08:51.832 ************************************ 00:08:51.832 09:29:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:51.832 09:29:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:51.832 09:29:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.832 09:29:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.832 09:29:46 -- common/autotest_common.sh@10 -- # set +x 00:08:51.832 ************************************ 00:08:51.832 START TEST app_cmdline 00:08:51.832 ************************************ 00:08:51.832 09:29:46 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:51.832 * Looking for test storage... 00:08:51.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:51.832 09:29:46 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.832 09:29:46 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.833 09:29:46 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.099 09:29:46 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:52.099 09:29:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.100 09:29:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.100 --rc genhtml_branch_coverage=1 00:08:52.100 --rc genhtml_function_coverage=1 00:08:52.100 --rc genhtml_legend=1 00:08:52.100 --rc geninfo_all_blocks=1 00:08:52.100 --rc geninfo_unexecuted_blocks=1 00:08:52.100 00:08:52.100 ' 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.100 --rc genhtml_branch_coverage=1 00:08:52.100 --rc genhtml_function_coverage=1 00:08:52.100 --rc genhtml_legend=1 00:08:52.100 --rc geninfo_all_blocks=1 00:08:52.100 --rc geninfo_unexecuted_blocks=1 00:08:52.100 00:08:52.100 ' 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.100 --rc genhtml_branch_coverage=1 00:08:52.100 --rc genhtml_function_coverage=1 00:08:52.100 --rc genhtml_legend=1 00:08:52.100 --rc geninfo_all_blocks=1 00:08:52.100 --rc geninfo_unexecuted_blocks=1 00:08:52.100 00:08:52.100 ' 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.100 --rc genhtml_branch_coverage=1 00:08:52.100 --rc genhtml_function_coverage=1 00:08:52.100 --rc genhtml_legend=1 00:08:52.100 --rc geninfo_all_blocks=1 00:08:52.100 --rc geninfo_unexecuted_blocks=1 00:08:52.100 00:08:52.100 ' 00:08:52.100 09:29:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:52.100 09:29:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1419594 00:08:52.100 09:29:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:52.100 09:29:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1419594 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1419594 ']' 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.100 09:29:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:52.100 [2024-10-07 09:29:46.766173] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:08:52.100 [2024-10-07 09:29:46.766300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419594 ] 00:08:52.100 [2024-10-07 09:29:46.840609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.358 [2024-10-07 09:29:46.964622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.617 09:29:47 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.617 09:29:47 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:52.617 09:29:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:53.183 { 00:08:53.183 "version": "SPDK v25.01-pre git sha1 3d8f4fe53", 00:08:53.183 "fields": { 00:08:53.183 "major": 25, 00:08:53.183 "minor": 1, 00:08:53.183 "patch": 0, 00:08:53.183 "suffix": "-pre", 00:08:53.183 "commit": "3d8f4fe53" 00:08:53.183 } 00:08:53.183 } 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:53.183 09:29:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:53.183 09:29:47 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.748 request: 00:08:53.748 { 00:08:53.748 "method": "env_dpdk_get_mem_stats", 00:08:53.748 "req_id": 1 00:08:53.748 } 00:08:53.748 Got JSON-RPC error response 00:08:53.748 response: 00:08:53.748 { 00:08:53.748 "code": -32601, 00:08:53.748 "message": "Method not found" 00:08:53.748 } 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.748 09:29:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1419594 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1419594 ']' 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1419594 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.748 09:29:48 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419594 00:08:54.005 09:29:48 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.005 09:29:48 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.005 09:29:48 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419594' 00:08:54.005 killing process with pid 1419594 00:08:54.005 09:29:48 app_cmdline -- common/autotest_common.sh@969 -- # kill 1419594 00:08:54.005 09:29:48 app_cmdline -- common/autotest_common.sh@974 -- # wait 1419594 00:08:54.572 00:08:54.572 real 0m2.621s 00:08:54.572 user 0m3.570s 00:08:54.572 sys 0m0.686s 00:08:54.572 09:29:49 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.572 09:29:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:54.572 ************************************ 00:08:54.572 END TEST app_cmdline 00:08:54.572 ************************************ 00:08:54.572 09:29:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:54.572 09:29:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.572 09:29:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.572 09:29:49 -- common/autotest_common.sh@10 -- # set +x 00:08:54.572 ************************************ 00:08:54.572 START TEST version 00:08:54.572 ************************************ 00:08:54.572 09:29:49 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:54.572 * Looking for test storage... 
00:08:54.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:54.572 09:29:49 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:54.572 09:29:49 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:54.572 09:29:49 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:54.833 09:29:49 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:54.833 09:29:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.833 09:29:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.833 09:29:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.833 09:29:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.833 09:29:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.833 09:29:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.833 09:29:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.833 09:29:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.833 09:29:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.833 09:29:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.833 09:29:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.833 09:29:49 version -- scripts/common.sh@344 -- # case "$op" in 00:08:54.833 09:29:49 version -- scripts/common.sh@345 -- # : 1 00:08:54.833 09:29:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.833 09:29:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.833 09:29:49 version -- scripts/common.sh@365 -- # decimal 1 00:08:54.833 09:29:49 version -- scripts/common.sh@353 -- # local d=1 00:08:54.833 09:29:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.833 09:29:49 version -- scripts/common.sh@355 -- # echo 1 00:08:54.833 09:29:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.833 09:29:49 version -- scripts/common.sh@366 -- # decimal 2 00:08:54.833 09:29:49 version -- scripts/common.sh@353 -- # local d=2 00:08:54.833 09:29:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.833 09:29:49 version -- scripts/common.sh@355 -- # echo 2 00:08:54.833 09:29:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.833 09:29:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.833 09:29:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.833 09:29:49 version -- scripts/common.sh@368 -- # return 0 00:08:54.833 09:29:49 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.833 09:29:49 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:54.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.833 --rc genhtml_branch_coverage=1 00:08:54.833 --rc genhtml_function_coverage=1 00:08:54.833 --rc genhtml_legend=1 00:08:54.833 --rc geninfo_all_blocks=1 00:08:54.833 --rc geninfo_unexecuted_blocks=1 00:08:54.833 00:08:54.833 ' 00:08:54.833 09:29:49 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:54.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.833 --rc genhtml_branch_coverage=1 00:08:54.833 --rc genhtml_function_coverage=1 00:08:54.833 --rc genhtml_legend=1 00:08:54.833 --rc geninfo_all_blocks=1 00:08:54.833 --rc geninfo_unexecuted_blocks=1 00:08:54.833 00:08:54.833 ' 00:08:54.833 09:29:49 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:54.833 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.833 --rc genhtml_branch_coverage=1 00:08:54.833 --rc genhtml_function_coverage=1 00:08:54.833 --rc genhtml_legend=1 00:08:54.834 --rc geninfo_all_blocks=1 00:08:54.834 --rc geninfo_unexecuted_blocks=1 00:08:54.834 00:08:54.834 ' 00:08:54.834 09:29:49 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:54.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.834 --rc genhtml_branch_coverage=1 00:08:54.834 --rc genhtml_function_coverage=1 00:08:54.834 --rc genhtml_legend=1 00:08:54.834 --rc geninfo_all_blocks=1 00:08:54.834 --rc geninfo_unexecuted_blocks=1 00:08:54.834 00:08:54.834 ' 00:08:54.834 09:29:49 version -- app/version.sh@17 -- # get_header_version major 00:08:54.834 09:29:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # cut -f2 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:54.834 09:29:49 version -- app/version.sh@17 -- # major=25 00:08:54.834 09:29:49 version -- app/version.sh@18 -- # get_header_version minor 00:08:54.834 09:29:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # cut -f2 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:54.834 09:29:49 version -- app/version.sh@18 -- # minor=1 00:08:54.834 09:29:49 version -- app/version.sh@19 -- # get_header_version patch 00:08:54.834 09:29:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # cut -f2 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:54.834 09:29:49 version -- app/version.sh@19 -- # patch=0 00:08:54.834 09:29:49 version -- app/version.sh@20 -- # get_header_version suffix 00:08:54.834 09:29:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # cut -f2 00:08:54.834 09:29:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:54.834 09:29:49 version -- app/version.sh@20 -- # suffix=-pre 00:08:54.834 09:29:49 version -- app/version.sh@22 -- # version=25.1 00:08:54.834 09:29:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:54.834 09:29:49 version -- app/version.sh@28 -- # version=25.1rc0 00:08:54.834 09:29:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:54.834 09:29:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:54.834 09:29:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:54.834 09:29:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:54.834 00:08:54.834 real 0m0.323s 00:08:54.834 user 0m0.224s 00:08:54.834 sys 0m0.137s 00:08:54.834 09:29:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.834 
09:29:49 version -- common/autotest_common.sh@10 -- # set +x 00:08:54.834 ************************************ 00:08:54.834 END TEST version 00:08:54.834 ************************************ 00:08:54.834 09:29:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:54.834 09:29:49 -- spdk/autotest.sh@194 -- # uname -s 00:08:54.834 09:29:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:54.834 09:29:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:54.834 09:29:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:54.834 09:29:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:54.834 09:29:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.834 09:29:49 -- common/autotest_common.sh@10 -- # set +x 00:08:54.834 09:29:49 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:54.834 09:29:49 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:54.834 09:29:49 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:54.834 09:29:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.834 09:29:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.834 09:29:49 -- common/autotest_common.sh@10 -- # set +x 00:08:54.834 ************************************ 00:08:54.834 START TEST nvmf_tcp 00:08:54.834 ************************************ 00:08:54.834 09:29:49 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:55.093 * Looking for test storage... 
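version.sh, which just completed, derives the version tuple by grepping the SPDK_VERSION_* defines out of include/spdk/version.h, taking the second tab-separated field and stripping the quotes, then checks that the installed python module reports the matching string. A condensed form of that parsing, reusing the pipeline visible in the log (the mapping of -pre to the python-style 25.1rc0 is done by version.sh itself and is only noted here):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK_DIR/include/spdk/version.h" \
        | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre
version="${major}.${minor}"
(( patch != 0 )) && version+=".${patch}"
echo "header version: ${version}${suffix}"    # 25.1-pre; compared against python's 25.1rc0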
00:08:55.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.093 09:29:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:55.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.093 --rc genhtml_branch_coverage=1 00:08:55.093 --rc genhtml_function_coverage=1 00:08:55.093 --rc genhtml_legend=1 00:08:55.093 --rc geninfo_all_blocks=1 00:08:55.093 --rc geninfo_unexecuted_blocks=1 00:08:55.093 00:08:55.093 ' 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:55.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.093 --rc genhtml_branch_coverage=1 00:08:55.093 --rc genhtml_function_coverage=1 00:08:55.093 --rc genhtml_legend=1 00:08:55.093 --rc geninfo_all_blocks=1 00:08:55.093 --rc geninfo_unexecuted_blocks=1 00:08:55.093 00:08:55.093 ' 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:08:55.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.093 --rc genhtml_branch_coverage=1 00:08:55.093 --rc genhtml_function_coverage=1 00:08:55.093 --rc genhtml_legend=1 00:08:55.093 --rc geninfo_all_blocks=1 00:08:55.093 --rc geninfo_unexecuted_blocks=1 00:08:55.093 00:08:55.093 ' 00:08:55.093 09:29:49 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:55.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.093 --rc genhtml_branch_coverage=1 00:08:55.093 --rc genhtml_function_coverage=1 00:08:55.093 --rc genhtml_legend=1 00:08:55.093 --rc geninfo_all_blocks=1 00:08:55.093 --rc geninfo_unexecuted_blocks=1 00:08:55.093 00:08:55.093 ' 00:08:55.093 09:29:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:55.093 09:29:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:55.094 09:29:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:55.094 09:29:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:55.094 09:29:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.094 09:29:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.094 ************************************ 00:08:55.094 START TEST nvmf_target_core 00:08:55.094 ************************************ 00:08:55.094 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:55.094 * Looking for test storage... 00:08:55.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:55.094 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:55.094 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:55.094 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.352 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:55.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.353 --rc genhtml_branch_coverage=1 00:08:55.353 --rc genhtml_function_coverage=1 00:08:55.353 --rc genhtml_legend=1 00:08:55.353 --rc geninfo_all_blocks=1 00:08:55.353 --rc geninfo_unexecuted_blocks=1 00:08:55.353 00:08:55.353 ' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:55.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.353 --rc genhtml_branch_coverage=1 00:08:55.353 --rc genhtml_function_coverage=1 00:08:55.353 --rc genhtml_legend=1 00:08:55.353 --rc geninfo_all_blocks=1 00:08:55.353 --rc geninfo_unexecuted_blocks=1 00:08:55.353 00:08:55.353 ' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:55.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.353 --rc genhtml_branch_coverage=1 00:08:55.353 --rc genhtml_function_coverage=1 00:08:55.353 --rc genhtml_legend=1 00:08:55.353 --rc geninfo_all_blocks=1 00:08:55.353 --rc geninfo_unexecuted_blocks=1 00:08:55.353 00:08:55.353 ' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:55.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.353 --rc genhtml_branch_coverage=1 00:08:55.353 --rc genhtml_function_coverage=1 00:08:55.353 --rc genhtml_legend=1 00:08:55.353 --rc geninfo_all_blocks=1 00:08:55.353 --rc geninfo_unexecuted_blocks=1 00:08:55.353 00:08:55.353 ' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.353 09:29:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.353 09:29:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.354 
************************************ 00:08:55.354 START TEST nvmf_abort 00:08:55.354 ************************************ 00:08:55.354 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:55.354 * Looking for test storage... 00:08:55.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.354 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:55.354 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:08:55.354 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.613 --rc genhtml_branch_coverage=1 00:08:55.613 --rc genhtml_function_coverage=1 00:08:55.613 --rc genhtml_legend=1 00:08:55.613 --rc geninfo_all_blocks=1 00:08:55.613 --rc geninfo_unexecuted_blocks=1 00:08:55.613 00:08:55.613 ' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.613 --rc genhtml_branch_coverage=1 00:08:55.613 --rc genhtml_function_coverage=1 00:08:55.613 --rc genhtml_legend=1 00:08:55.613 --rc geninfo_all_blocks=1 00:08:55.613 --rc geninfo_unexecuted_blocks=1 00:08:55.613 00:08:55.613 ' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.613 --rc genhtml_branch_coverage=1 00:08:55.613 --rc genhtml_function_coverage=1 00:08:55.613 --rc genhtml_legend=1 00:08:55.613 --rc geninfo_all_blocks=1 00:08:55.613 --rc geninfo_unexecuted_blocks=1 00:08:55.613 00:08:55.613 ' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.613 --rc genhtml_branch_coverage=1 00:08:55.613 --rc genhtml_function_coverage=1 00:08:55.613 --rc genhtml_legend=1 00:08:55.613 --rc geninfo_all_blocks=1 00:08:55.613 --rc geninfo_unexecuted_blocks=1 00:08:55.613 00:08:55.613 ' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.613 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
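The trace that follows shows what nvmftestinit sets up on this phy/e810 test bed: it scans for supported NVMe-oF NICs, moves one E810 port into a dedicated network namespace for the target, assigns the 10.0.0.x test addresses, opens TCP port 4420 in iptables, and checks connectivity with ping in both directions. A minimal sketch of the equivalent manual setup, assuming the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace seen in this run:

    # target port lives in its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator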
00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.614 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.144 09:29:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:58.144 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:58.144 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.144 09:29:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:58.144 Found net devices under 0000:84:00.0: cvl_0_0 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.144 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:58.145 Found net devices under 0000:84:00.1: cvl_0_1 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.145 09:29:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:08:58.145 00:08:58.145 --- 10.0.0.2 ping statistics --- 00:08:58.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.145 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:58.145 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:58.404 00:08:58.404 --- 10.0.0.1 ping statistics --- 00:08:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.404 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:58.404 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1421833 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1421833 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1421833 ']' 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.404 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:58.404 [2024-10-07 09:29:53.110093] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:08:58.404 [2024-10-07 09:29:53.110241] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.663 [2024-10-07 09:29:53.239348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:58.663 [2024-10-07 09:29:53.445020] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.663 [2024-10-07 09:29:53.445089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.663 [2024-10-07 09:29:53.445106] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.663 [2024-10-07 09:29:53.445119] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.663 [2024-10-07 09:29:53.445131] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.663 [2024-10-07 09:29:53.447039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.663 [2024-10-07 09:29:53.447097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.663 [2024-10-07 09:29:53.447101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 [2024-10-07 09:29:54.196508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 Malloc0 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 Delay0 
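With the target application running inside the namespace, abort.sh builds its test configuration over JSON-RPC and then drives it with the abort example from the initiator side. The rpc_cmd calls in the surrounding trace correspond roughly to the following scripts/rpc.py invocations (a sketch only; paths are relative to the spdk checkout, using the /var/tmp/spdk.sock RPC socket shown in the waitforlisten call above):

    # NVMe/TCP transport plus a delayed RAM-backed namespace, flags as abort.sh passes them
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0               # 64 MB malloc bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # queue I/O against the slow Delay0 namespace from the initiator and abort it (queue depth 128)
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev wraps Malloc0 with a large artificial latency (the -r/-t/-w/-n values are in microseconds) so that requests stay queued long enough for the abort commands to find something to cancel; the summary lines below report how many aborts were submitted and how many completed successfully.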
00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 [2024-10-07 09:29:54.264152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.598 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:59.598 [2024-10-07 09:29:54.348679] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:02.132 Initializing NVMe Controllers 00:09:02.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:02.132 controller IO queue size 128 less than required 00:09:02.132 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:02.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:02.132 Initialization complete. Launching workers. 
00:09:02.132 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28186 00:09:02.132 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28247, failed to submit 62 00:09:02.132 success 28190, unsuccessful 57, failed 0 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.132 rmmod nvme_tcp 00:09:02.132 rmmod nvme_fabrics 00:09:02.132 rmmod nvme_keyring 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1421833 ']' 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1421833 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1421833 ']' 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1421833 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1421833 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1421833' 00:09:02.132 killing process with pid 1421833 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1421833 00:09:02.132 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1421833 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:02.391 09:29:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.391 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.289 00:09:04.289 real 0m8.995s 00:09:04.289 user 0m13.152s 00:09:04.289 sys 0m3.323s 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:04.289 ************************************ 00:09:04.289 END TEST nvmf_abort 00:09:04.289 ************************************ 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.289 ************************************ 00:09:04.289 START TEST nvmf_ns_hotplug_stress 00:09:04.289 ************************************ 00:09:04.289 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:04.549 * Looking for test storage... 
00:09:04.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:04.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.549 --rc genhtml_branch_coverage=1 00:09:04.549 --rc genhtml_function_coverage=1 00:09:04.549 --rc genhtml_legend=1 00:09:04.549 --rc geninfo_all_blocks=1 00:09:04.549 --rc geninfo_unexecuted_blocks=1 00:09:04.549 00:09:04.549 ' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:04.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.549 --rc genhtml_branch_coverage=1 00:09:04.549 --rc genhtml_function_coverage=1 00:09:04.549 --rc genhtml_legend=1 00:09:04.549 --rc geninfo_all_blocks=1 00:09:04.549 --rc geninfo_unexecuted_blocks=1 00:09:04.549 00:09:04.549 ' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:04.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.549 --rc genhtml_branch_coverage=1 00:09:04.549 --rc genhtml_function_coverage=1 00:09:04.549 --rc genhtml_legend=1 00:09:04.549 --rc geninfo_all_blocks=1 00:09:04.549 --rc geninfo_unexecuted_blocks=1 00:09:04.549 00:09:04.549 ' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:04.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.549 --rc genhtml_branch_coverage=1 00:09:04.549 --rc genhtml_function_coverage=1 00:09:04.549 --rc genhtml_legend=1 00:09:04.549 --rc geninfo_all_blocks=1 00:09:04.549 --rc geninfo_unexecuted_blocks=1 00:09:04.549 00:09:04.549 ' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.549 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.550 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.836 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.837 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:07.837 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:07.837 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.837 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.837 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.837 
09:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:07.837 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:07.837 Found net devices under 0000:84:00.0: cvl_0_0 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:07.837 Found net devices under 0000:84:00.1: cvl_0_1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:09:07.837 00:09:07.837 --- 10.0.0.2 ping statistics --- 00:09:07.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.837 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:09:07.837 00:09:07.837 --- 10.0.0.1 ping statistics --- 00:09:07.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.837 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1424443 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1424443 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1424443 ']' 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.837 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.837 [2024-10-07 09:30:02.265856] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:09:07.837 [2024-10-07 09:30:02.265969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.837 [2024-10-07 09:30:02.374469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.837 [2024-10-07 09:30:02.547048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.837 [2024-10-07 09:30:02.547121] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.837 [2024-10-07 09:30:02.547151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.837 [2024-10-07 09:30:02.547173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.837 [2024-10-07 09:30:02.547192] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
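For reference, the TCP test-bed plumbing traced in the preceding entries condenses to the short sequence below. This is a sketch reconstructed from this log rather than from the script source; it assumes root and reuses the interface names (cvl_0_0/cvl_0_1 on the two E810 ports detected above), addresses, and binary paths exactly as this run reported them.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # the harness also tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                                   # target reachable from the default namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # initiator reachable from inside the namespace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# the script then waits for the target's /var/tmp/spdk.sock RPC socket before provisioning it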
00:09:07.838 [2024-10-07 09:30:02.548793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.838 [2024-10-07 09:30:02.548852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.838 [2024-10-07 09:30:02.548856] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:08.095 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.659 [2024-10-07 09:30:03.205887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.659 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.224 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.789 [2024-10-07 09:30:04.491279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.789 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.723 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:10.723 Malloc0 00:09:10.723 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.980 Delay0 00:09:10.981 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.546 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:12.112 NULL1 00:09:12.112 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:12.369 09:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1425021 00:09:12.369 09:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:12.369 09:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:12.369 09:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.742 Read completed with error (sct=0, sc=11) 00:09:13.742 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.999 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:13.999 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:14.257 true 00:09:14.257 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:14.257 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.455 09:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:15.455 09:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:15.750 true 00:09:15.750 09:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:15.750 09:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.351 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.863 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:16.863 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:17.119 true 00:09:17.119 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:17.119 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.681 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.245 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:18.245 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:18.809 true 00:09:18.809 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:18.810 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.743 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.259 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:20.259 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:20.823 true 00:09:20.823 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:20.823 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.405 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.662 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:21.662 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:22.226 true 00:09:22.226 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:22.226 09:30:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.789 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.302 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:23.302 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:23.559 true 00:09:23.559 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:23.559 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.816 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.380 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:24.380 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:24.950 true 00:09:24.950 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:24.951 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.322 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.580 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:26.580 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 
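By this point the trace has already created the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with its 10.0.0.2:4420 listener, and the Malloc0/Delay0/NULL1 bdevs, and has put spdk_nvme_perf in the background as PERF_PID. The cycle the script keeps repeating, condensed from the xtrace into a sketch (loop form and the increment are paraphrased here, not the verbatim ns_hotplug_stress.sh), is:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# background randread load over NVMe/TCP, as launched at ns_hotplug_stress.sh@40 above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                              # loop while perf is still running
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back on the Delay0 bdev
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow NULL1: 1001, 1002, ...
done

The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are initiator-side I/O errors reported by the perf job, presumably the fallout of the namespace being yanked and re-added while reads are in flight.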
00:09:27.145 true 00:09:27.145 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:27.145 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.778 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:28.778 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:29.036 true 00:09:29.036 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:29.036 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.602 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.118 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:30.118 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:30.376 true 00:09:30.376 09:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:30.376 09:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 09:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.567 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:31.567 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:31.825 true 00:09:31.825 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:31.825 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.759 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.017 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:33.017 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:33.584 true 00:09:33.584 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:33.584 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.956 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.956 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:34.956 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:35.521 true 00:09:35.521 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:35.521 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.087 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.346 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:36.346 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:36.605 true 00:09:36.864 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:36.864 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.429 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.429 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:09:37.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.687 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:37.687 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:38.254 true 00:09:38.254 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:38.254 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.819 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.078 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:39.078 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:39.744 true 00:09:39.744 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:39.745 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.003 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.260 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:40.260 09:30:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:40.826 true 00:09:40.826 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:40.826 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.392 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.908 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:41.908 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:42.166 true 00:09:42.166 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:42.166 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.732 Initializing NVMe Controllers 00:09:42.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.732 Controller IO queue size 128, less than required. 00:09:42.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:42.732 Controller IO queue size 128, less than required. 00:09:42.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:42.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:42.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:42.732 Initialization complete. Launching workers. 
00:09:42.732 ======================================================== 00:09:42.732 Latency(us) 00:09:42.732 Device Information : IOPS MiB/s Average min max 00:09:42.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4907.13 2.40 19256.29 2346.70 1194473.82 00:09:42.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14728.97 7.19 8689.98 1760.54 446566.50 00:09:42.732 ======================================================== 00:09:42.732 Total : 19636.10 9.59 11330.54 1760.54 1194473.82 00:09:42.732 00:09:42.732 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.297 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:43.297 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:43.555 true 00:09:43.555 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1425021 00:09:43.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1425021) - No such process 00:09:43.555 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1425021 00:09:43.555 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.119 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.377 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:44.377 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:44.377 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:44.377 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:44.377 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:44.941 null0 00:09:44.941 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:44.941 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:44.941 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:45.198 null1 00:09:45.198 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.198 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.198 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:45.768 null2 00:09:45.768 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.768 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.768 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:46.026 null3 00:09:46.026 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:46.026 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:46.026 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:46.591 null4 00:09:46.591 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:46.592 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:46.592 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:47.157 null5 00:09:47.157 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.157 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.157 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:47.416 null6 00:09:47.416 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.416 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.416 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:48.351 null7 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
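For reference, the worker setup that the xtrace records above are stepping through (roughly ns_hotplug_stress.sh lines 58-66) has the shape sketched below; the `$rpc_py` shorthand, loop wording, and comments are assumptions reconstructed from the trace, not the script verbatim.

```bash
# Sketch of the null-bdev creation and worker launch implied by the xtrace.
# $rpc_py and the exact loop syntax are reconstructions (assumptions).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

nthreads=8
pids=()
for ((i = 0; i < nthreads; ++i)); do
	# create null0..null7 (size 100, block size 4096, as seen in the trace)
	$rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; ++i)); do
	# one background add/remove worker per namespace ID 1..8
	add_remove $((i + 1)) "null$i" &
	pids+=($!)
done
wait "${pids[@]}"
```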
00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
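The add_remove helper those workers run (script lines 14-18 in the trace) boils down to ten add/remove round trips per namespace against nqn.2016-06.io.spdk:cnode1; the argument handling below is inferred from the `local nsid=1 bdev=null0` trace line and is an approximation, not the script itself.

```bash
# Approximate reconstruction of add_remove() from the xtrace; not verbatim.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

add_remove() {
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; ++i)); do
		# attach the null bdev as namespace $nsid, then detach it again
		$rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}
```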
00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1429706 1429707 1429709 1429711 1429713 1429715 1429717 1429719 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.351 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.351 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.351 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.351 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.610 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.868 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.127 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.385 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.643 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.901 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.159 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.159 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.160 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.418 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.677 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.935 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.193 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.193 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.193 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.193 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.193 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.451 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.709 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.709 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.709 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.710 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.710 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.710 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.710 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.710 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.967 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.968 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.968 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.968 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.226 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.226 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.484 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.742 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.000 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.258 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.258 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.516 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.773 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.030 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.287 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.287 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.287 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.287 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.287 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.287 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:54.544 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.108 rmmod nvme_tcp 00:09:55.108 rmmod nvme_fabrics 00:09:55.108 rmmod nvme_keyring 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1424443 ']' 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1424443 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1424443 ']' 00:09:55.108 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1424443 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1424443 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1424443' 00:09:55.109 killing process with pid 1424443 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1424443 00:09:55.109 09:30:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1424443 00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
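The ns_hotplug_stress.sh@16, @17 and @18 entries above are the stress loop itself: ten passes that keep attaching namespaces 1-8 (each nsid N backed by the null bdev null(N-1)) to nqn.2016-06.io.spdk:cnode1 and detaching them again through rpc.py while connections are live. A minimal bash sketch of that loop, built only from the two RPCs visible in the trace; the real script interleaves the adds and removes in a shuffled order, which is not reproduced here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      # Attach nsid n backed by null bdev null(n-1), as in the @17 entries.
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # Detach the namespaces again by nsid while I/O may still be in flight (@18 entries).
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done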
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:55.367 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:57.899
00:09:57.899 real 0m53.007s
00:09:57.899 user 4m1.917s
00:09:57.899 sys 0m18.078s
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:57.899 ************************************
00:09:57.899 END TEST nvmf_ns_hotplug_stress
00:09:57.899 ************************************
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:57.899 ************************************
00:09:57.899 START TEST nvmf_delete_subsystem
00:09:57.899 ************************************
00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:57.899 * Looking for test storage...
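nvmftestfini, traced just before the END TEST banner, is the shared teardown: unload the initiator-side NVMe modules, kill the nvmf_tgt started for the test, drop the SPDK-specific firewall rules and flush the test addresses. A rough equivalent assembled only from the commands that appear in the trace (the body of _remove_spdk_ns is hidden behind xtrace_disable_per_cmd, so the final namespace deletion is an assumption):

  # nvmfcleanup: the helper retries the unload up to 20 times; the rmmod lines above are its output.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # killprocess: stop the target reactor process (pid 1424443 in this run, a child of the test shell).
  nvmfpid=1424443
  kill "$nvmfpid"
  wait "$nvmfpid"

  # iptr: re-load the ruleset minus everything tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # nvmf_tcp_fini: flush the initiator-side address; _remove_spdk_ns presumably also deletes
  # the cvl_0_0_ns_spdk network namespace (assumption, not shown in the trace).
  ip -4 addr flush cvl_0_1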
00:09:57.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.899 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.900 --rc genhtml_branch_coverage=1 00:09:57.900 --rc genhtml_function_coverage=1 00:09:57.900 --rc genhtml_legend=1 00:09:57.900 --rc geninfo_all_blocks=1 00:09:57.900 --rc geninfo_unexecuted_blocks=1 00:09:57.900 00:09:57.900 ' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.900 --rc genhtml_branch_coverage=1 00:09:57.900 --rc genhtml_function_coverage=1 00:09:57.900 --rc genhtml_legend=1 00:09:57.900 --rc geninfo_all_blocks=1 00:09:57.900 --rc geninfo_unexecuted_blocks=1 00:09:57.900 00:09:57.900 ' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.900 --rc genhtml_branch_coverage=1 00:09:57.900 --rc genhtml_function_coverage=1 00:09:57.900 --rc genhtml_legend=1 00:09:57.900 --rc geninfo_all_blocks=1 00:09:57.900 --rc geninfo_unexecuted_blocks=1 00:09:57.900 00:09:57.900 ' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.900 --rc genhtml_branch_coverage=1 00:09:57.900 --rc genhtml_function_coverage=1 00:09:57.900 --rc genhtml_legend=1 00:09:57.900 --rc geninfo_all_blocks=1 00:09:57.900 --rc geninfo_unexecuted_blocks=1 00:09:57.900 00:09:57.900 ' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.900 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.901 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.901 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:57.901 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:57.901 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.901 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:00.432 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.432 
09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:00.432 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:00.432 Found net devices under 0000:84:00.0: cvl_0_0 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:00.432 Found net devices under 0000:84:00.1: cvl_0_1 
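The common.sh block above builds the candidate NIC list from known Intel/Mellanox device IDs, keeps the two E810 ports present on this host (0000:84:00.0 and 0000:84:00.1, device 0x159b), and resolves each one to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. The lookup reduced to its sysfs essentials; the hard-coded BDF list and the device-id read below stand in for the script's cached PCI bus scan:

  for pci in 0000:84:00.0 0000:84:00.1; do
      # 0x159b is the E810 id matched by the e810 branch in the trace.
      [ "$(cat /sys/bus/pci/devices/"$pci"/device)" = 0x159b ] || continue
      # Every entry under .../net/ is a netdev bound to this PCI function; keep the ones that are up.
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          [ "$(cat "$netdir"/operstate)" = up ] || continue
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done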
00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:00.432 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.433 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:10:00.433 00:10:00.433 --- 10.0.0.2 ping statistics --- 00:10:00.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.433 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:00.433 00:10:00.433 --- 10.0.0.1 ping statistics --- 00:10:00.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.433 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1432756 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1432756 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1432756 ']' 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.433 09:30:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.433 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.433 [2024-10-07 09:30:55.209729] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:10:00.433 [2024-10-07 09:30:55.209829] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.692 [2024-10-07 09:30:55.286022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.692 [2024-10-07 09:30:55.402125] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.692 [2024-10-07 09:30:55.402199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.692 [2024-10-07 09:30:55.402215] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.692 [2024-10-07 09:30:55.402230] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.692 [2024-10-07 09:30:55.402242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.692 [2024-10-07 09:30:55.403140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.692 [2024-10-07 09:30:55.403149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 [2024-10-07 09:30:55.567481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.950 09:30:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 [2024-10-07 09:30:55.583704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 NULL1 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 Delay0 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1432811 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:00.950 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:00.950 [2024-10-07 09:30:55.658460] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
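Setup for the delete_subsystem test is driven entirely over RPC: a TCP transport, one subsystem listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that plenty of I/O is still outstanding when the subsystem goes away. The rpc_cmd and perf traces above, collected into one sequence (rpc_cmd is the test helper around scripts/rpc.py; every argument below is taken from the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # NULL1: 1000 MiB null bdev with 512-byte blocks; Delay0 layers roughly 1 s of artificial
  # latency (the 1000000 us arguments) on top of it so commands stay queued.
  "$rpc" bdev_null_create NULL1 1000 512
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

  # Drive queued I/O from the initiator side, then delete the subsystem underneath that load.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  "$rpc" nvmf_delete_subsystem "$nqn"

The burst of 'completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines that follows is the intended outcome: deleting cnode1 while spdk_nvme_perf still has commands queued forces those commands to complete with an abort status instead of hanging.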
00:10:02.851 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.851 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.851 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 [2024-10-07 09:30:57.797106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8078000c00 is same with the state(6) to be set 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read 
completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed 
with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 starting I/O failed: -6 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Read completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.110 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 starting I/O failed: -6 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 starting I/O failed: -6 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 starting I/O failed: -6 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 starting I/O failed: -6 00:10:03.111 [2024-10-07 09:30:57.797760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1570 is same with the state(6) to be set 00:10:03.111 starting I/O failed: -6 00:10:03.111 starting I/O failed: -6 00:10:03.111 starting I/O failed: -6 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write 
completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:03.111 Write completed with error (sct=0, sc=8) 00:10:03.111 Read completed with error (sct=0, sc=8) 00:10:04.043 [2024-10-07 09:30:58.755304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d2a70 is same with the state(6) to be set 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 [2024-10-07 09:30:58.797737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1750 is same with the state(6) to be set 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error 
(sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 [2024-10-07 09:30:58.797991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1390 is same with the state(6) to be set 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 [2024-10-07 09:30:58.798357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f807800d7a0 is same with the state(6) to be set 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 Write completed with error (sct=0, sc=8) 00:10:04.043 Read completed with error (sct=0, sc=8) 00:10:04.043 [2024-10-07 09:30:58.800047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f807800cfe0 is same with the state(6) to be set 00:10:04.043 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.043 Initializing NVMe Controllers 00:10:04.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.043 Controller IO queue size 128, less than required. 00:10:04.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:04.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:04.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:04.043 Initialization complete. Launching workers. 00:10:04.043 ======================================================== 00:10:04.043 Latency(us) 00:10:04.043 Device Information : IOPS MiB/s Average min max 00:10:04.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.17 0.08 907692.82 420.17 1012962.78 00:10:04.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.19 0.08 909021.78 569.01 1012173.72 00:10:04.043 ======================================================== 00:10:04.043 Total : 331.36 0.16 908351.33 420.17 1012962.78 00:10:04.043 00:10:04.043 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:04.043 [2024-10-07 09:30:58.800806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d2a70 (9): Bad file descriptor 00:10:04.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:04.043 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1432811 00:10:04.043 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1432811 00:10:04.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1432811) - No such process 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1432811 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1432811 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1432811 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.610 [2024-10-07 09:30:59.321990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1433304 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:04.610 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.610 [2024-10-07 09:30:59.379629] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
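At this point the test has recreated the subsystem with nvmf_create_subsystem ... -m 10, re-added the 10.0.0.2:4420 listener and the Delay0 namespace, and launched a second, 3-second spdk_nvme_perf run (perf_pid=1433304). Instead of sleeping for a fixed interval, the script now polls that PID: each iteration below is one 'kill -0 1433304' liveness probe followed by 'sleep 0.5', with a counter that gives up after about 20 tries. A standalone version of that bounded wait, assuming $perf_pid holds the PID of a perf process started with & from the same shell, looks roughly like this (a sketch of the pattern, not the script's exact wording):

   delay=0
   while kill -0 "$perf_pid" 2>/dev/null; do     # still running?
       if (( delay++ > 20 )); then               # ~20 * 0.5 s budget before giving up
           echo "perf ($perf_pid) did not exit in time" >&2
           exit 1
       fi
       sleep 0.5
   done
   wait "$perf_pid" || true                      # reap the child; ignore its exit status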
00:10:05.177 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.177 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:05.177 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.742 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.742 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:05.742 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.351 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.351 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:06.351 09:31:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.636 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.636 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:06.636 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.202 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.202 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:07.202 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.766 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.766 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:07.766 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.766 Initializing NVMe Controllers 00:10:07.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.766 Controller IO queue size 128, less than required. 00:10:07.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:07.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:07.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:07.767 Initialization complete. Launching workers. 
00:10:07.767 ======================================================== 00:10:07.767 Latency(us) 00:10:07.767 Device Information : IOPS MiB/s Average min max 00:10:07.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005173.60 1000231.03 1043255.75 00:10:07.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004576.64 1000213.53 1013074.42 00:10:07.767 ======================================================== 00:10:07.767 Total : 256.00 0.12 1004875.12 1000213.53 1043255.75 00:10:07.767 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1433304 00:10:08.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1433304) - No such process 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1433304 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.333 rmmod nvme_tcp 00:10:08.333 rmmod nvme_fabrics 00:10:08.333 rmmod nvme_keyring 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:08.333 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1432756 ']' 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1432756 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1432756 ']' 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1432756 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1432756 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1432756' 00:10:08.334 killing process with pid 1432756 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1432756 00:10:08.334 09:31:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1432756 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.593 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.493 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.493 00:10:10.493 real 0m13.142s 00:10:10.493 user 0m28.111s 00:10:10.493 sys 0m3.575s 00:10:10.493 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.493 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.493 ************************************ 00:10:10.493 END TEST nvmf_delete_subsystem 00:10:10.493 ************************************ 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.752 ************************************ 00:10:10.752 START TEST nvmf_host_management 00:10:10.752 ************************************ 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:10.752 * Looking for test storage... 
00:10:10.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.752 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.012 --rc genhtml_branch_coverage=1 00:10:11.012 --rc genhtml_function_coverage=1 00:10:11.012 --rc genhtml_legend=1 00:10:11.012 --rc geninfo_all_blocks=1 00:10:11.012 --rc geninfo_unexecuted_blocks=1 00:10:11.012 00:10:11.012 ' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.012 --rc genhtml_branch_coverage=1 00:10:11.012 --rc genhtml_function_coverage=1 00:10:11.012 --rc genhtml_legend=1 00:10:11.012 --rc geninfo_all_blocks=1 00:10:11.012 --rc geninfo_unexecuted_blocks=1 00:10:11.012 00:10:11.012 ' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.012 --rc genhtml_branch_coverage=1 00:10:11.012 --rc genhtml_function_coverage=1 00:10:11.012 --rc genhtml_legend=1 00:10:11.012 --rc geninfo_all_blocks=1 00:10:11.012 --rc geninfo_unexecuted_blocks=1 00:10:11.012 00:10:11.012 ' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.012 --rc genhtml_branch_coverage=1 00:10:11.012 --rc genhtml_function_coverage=1 00:10:11.012 --rc genhtml_legend=1 00:10:11.012 --rc geninfo_all_blocks=1 00:10:11.012 --rc geninfo_unexecuted_blocks=1 00:10:11.012 00:10:11.012 ' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.012 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:11.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.013 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.546 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.546 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.546 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.546 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:13.547 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:13.547 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:13.547 Found net devices under 0000:84:00.0: cvl_0_0 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.547 09:31:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:13.547 Found net devices under 0000:84:00.1: cvl_0_1 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.547 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:10:13.548 00:10:13.548 --- 10.0.0.2 ping statistics --- 00:10:13.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.548 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:10:13.548 00:10:13.548 --- 10.0.0.1 ping statistics --- 00:10:13.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.548 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1435687 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1435687 00:10:13.548 09:31:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1435687 ']' 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.548 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.806 [2024-10-07 09:31:08.378817] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:10:13.806 [2024-10-07 09:31:08.378937] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.806 [2024-10-07 09:31:08.515099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.064 [2024-10-07 09:31:08.648834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.064 [2024-10-07 09:31:08.648914] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.064 [2024-10-07 09:31:08.648941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.064 [2024-10-07 09:31:08.648953] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.064 [2024-10-07 09:31:08.648962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
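For readers following the trace, the nvmf_tcp_init section above (nvmf/common.sh@250-291) reduces to a handful of iproute2 and iptables commands that split one NIC's two ports into a "target" and an "initiator" side. The sketch below is a condensed, standalone rendering under the assumptions visible in this run (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, NVMe/TCP port 4420); the real helper derives these values instead of hard-coding them.

# Condensed sketch of the target/initiator split performed by nvmf_tcp_init.
TGT_IF=cvl_0_0            # moved into its own namespace, becomes the target side (10.0.0.2)
INI_IF=cvl_0_1            # stays in the default namespace, acts as the initiator (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$INI_IF"
ip -4 addr flush "$TGT_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port toward the initiator and prove both directions work.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                         # default namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator side
modprobe nvme-tcp

# The target itself then runs inside the namespace; its RPC socket is a plain
# filesystem path, so rpc_cmd can still reach it from the default namespace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &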
00:10:14.064 [2024-10-07 09:31:08.650728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.064 [2024-10-07 09:31:08.650812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:14.064 [2024-10-07 09:31:08.650815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.064 [2024-10-07 09:31:08.650754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.064 [2024-10-07 09:31:08.831724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.064 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.322 Malloc0 00:10:14.322 [2024-10-07 09:31:08.901276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1435855 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1435855 /var/tmp/bdevperf.sock 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1435855 ']' 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:14.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:14.322 { 00:10:14.322 "params": { 00:10:14.322 "name": "Nvme$subsystem", 00:10:14.322 "trtype": "$TEST_TRANSPORT", 00:10:14.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.322 "adrfam": "ipv4", 00:10:14.322 "trsvcid": "$NVMF_PORT", 00:10:14.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.322 "hdgst": ${hdgst:-false}, 00:10:14.322 "ddgst": ${ddgst:-false} 00:10:14.322 }, 00:10:14.322 "method": "bdev_nvme_attach_controller" 00:10:14.322 } 00:10:14.322 EOF 00:10:14.322 )") 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:14.322 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:14.322 "params": { 00:10:14.322 "name": "Nvme0", 00:10:14.322 "trtype": "tcp", 00:10:14.322 "traddr": "10.0.0.2", 00:10:14.322 "adrfam": "ipv4", 00:10:14.322 "trsvcid": "4420", 00:10:14.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:14.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:14.322 "hdgst": false, 00:10:14.322 "ddgst": false 00:10:14.322 }, 00:10:14.322 "method": "bdev_nvme_attach_controller" 00:10:14.322 }' 00:10:14.322 [2024-10-07 09:31:09.008368] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:10:14.322 [2024-10-07 09:31:09.008462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435855 ] 00:10:14.322 [2024-10-07 09:31:09.080795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.581 [2024-10-07 09:31:09.197515] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.840 Running I/O for 10 seconds... 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.840 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.098 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:15.098 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:15.098 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:15.358 
09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.358 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:15.359 [2024-10-07 09:31:09.977058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:15.359 [2024-10-07 09:31:09.977278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 
09:31:09.977580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977865] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.977972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.977986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.359 [2024-10-07 09:31:09.978253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.359 [2024-10-07 09:31:09.978267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.978971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.978987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:15.360 [2024-10-07 09:31:09.979000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.360 [2024-10-07 09:31:09.979014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10003b0 is same with the state(6) to be set 00:10:15.360 [2024-10-07 09:31:09.979085] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10003b0 was disconnected and freed. reset controller. 
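The wall of ABORTED - SQ DELETION completions above is the intended fault injection of this test rather than a transport failure: at host_management.sh@84 the script revokes the host's access to the subsystem while the bdevperf verify job is still in flight, the target tears down the queue pair, every command still queued on it is completed as aborted, and bdev_nvme responds by scheduling a controller reset. A rough equivalent of that step, calling scripts/rpc.py directly (the path and the default /var/tmp/spdk.sock socket are assumptions; the test goes through its rpc_cmd wrapper):

NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host0

# Revoke the host while I/O is outstanding: its qpair is disconnected and all
# queued commands complete as ABORTED - SQ DELETION, as seen in the log above.
scripts/rpc.py nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"

# Re-admit the host; bdev_nvme's reset/reconnect path then restores Nvme0n1,
# which is the "Resetting controller successful" message that follows.
scripts/rpc.py nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
sleep 1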
00:10:15.360 [2024-10-07 09:31:09.980250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:15.360 task offset: 88960 on job bdev=Nvme0n1 fails 00:10:15.360 00:10:15.360 Latency(us) 00:10:15.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:15.360 Job: Nvme0n1 ended in about 0.43 seconds with error 00:10:15.360 Verification LBA range: start 0x0 length 0x400 00:10:15.360 Nvme0n1 : 0.43 1491.86 93.24 149.19 0.00 37951.98 2633.58 33981.63 00:10:15.360 =================================================================================================================== 00:10:15.360 Total : 1491.86 93.24 149.19 0.00 37951.98 2633.58 33981.63 00:10:15.360 [2024-10-07 09:31:09.983261] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.360 [2024-10-07 09:31:09.983293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde72c0 (9): Bad file descriptor 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.360 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:15.360 [2024-10-07 09:31:10.127061] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
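Before the host was removed, the test had already confirmed that I/O was genuinely flowing: the waitforio helper traced at host_management.sh@54-58 earlier polls bdevperf's RPC socket until the Nvme0n1 bdev reports a minimum number of completed reads (67 on the first sample, 579 on the second, against a threshold of 100). A condensed form of that loop, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

RPC_SOCK=/var/tmp/bdevperf.sock
i=10
while ((i != 0)); do
    read_io_count=$(scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    # Stop polling once bdevperf has pushed enough reads through the target.
    if [ "$read_io_count" -ge 100 ]; then
        break
    fi
    sleep 0.25
    ((i--))
done
((i != 0))    # iterations left over => I/O made progress before the fault was injected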
00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1435855 00:10:16.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1435855) - No such process 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:16.294 { 00:10:16.294 "params": { 00:10:16.294 "name": "Nvme$subsystem", 00:10:16.294 "trtype": "$TEST_TRANSPORT", 00:10:16.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.294 "adrfam": "ipv4", 00:10:16.294 "trsvcid": "$NVMF_PORT", 00:10:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.294 "hdgst": ${hdgst:-false}, 00:10:16.294 "ddgst": ${ddgst:-false} 00:10:16.294 }, 00:10:16.294 "method": "bdev_nvme_attach_controller" 00:10:16.294 } 00:10:16.294 EOF 00:10:16.294 )") 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:16.294 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:16.294 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:16.294 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:16.294 "params": { 00:10:16.294 "name": "Nvme0", 00:10:16.294 "trtype": "tcp", 00:10:16.294 "traddr": "10.0.0.2", 00:10:16.294 "adrfam": "ipv4", 00:10:16.294 "trsvcid": "4420", 00:10:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:16.294 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:16.294 "hdgst": false, 00:10:16.294 "ddgst": false 00:10:16.294 }, 00:10:16.294 "method": "bdev_nvme_attach_controller" 00:10:16.294 }' 00:10:16.294 [2024-10-07 09:31:11.064888] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:10:16.294 [2024-10-07 09:31:11.065050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436133 ] 00:10:16.552 [2024-10-07 09:31:11.162214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.552 [2024-10-07 09:31:11.275612] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.811 Running I/O for 1 seconds... 
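The second bdevperf invocation above follows the same pattern as the first: gen_nvmf_target_json emits a bdev_nvme_attach_controller configuration for the subsystem created earlier and hands it to bdevperf over a file descriptor, so nothing touches disk. The standalone sketch below writes the JSON to a temporary file instead, purely for readability; the field values are the ones printed in the trace, and the surrounding "subsystems"/"config" wrapper reflects SPDK's usual --json config layout rather than the exact text of the helper.

cat > /tmp/bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# 64-deep, 64 KiB verify workload for 1 second against the re-attached namespace.
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1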
00:10:17.746 1536.00 IOPS, 96.00 MiB/s 00:10:17.746 Latency(us) 00:10:17.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:17.746 Verification LBA range: start 0x0 length 0x400 00:10:17.746 Nvme0n1 : 1.01 1579.95 98.75 0.00 0.00 39859.31 6844.87 34369.99 00:10:17.746 =================================================================================================================== 00:10:17.746 Total : 1579.95 98.75 0.00 0.00 39859.31 6844.87 34369.99 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.004 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.004 rmmod nvme_tcp 00:10:18.004 rmmod nvme_fabrics 00:10:18.004 rmmod nvme_keyring 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1435687 ']' 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1435687 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1435687 ']' 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1435687 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1435687 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:18.263 
09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1435687' 00:10:18.263 killing process with pid 1435687 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1435687 00:10:18.263 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1435687 00:10:18.521 [2024-10-07 09:31:13.200049] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.521 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:21.053 00:10:21.053 real 0m9.934s 00:10:21.053 user 0m22.140s 00:10:21.053 sys 0m3.423s 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.053 ************************************ 00:10:21.053 END TEST nvmf_host_management 00:10:21.053 ************************************ 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.053 ************************************ 00:10:21.053 START TEST nvmf_lvol 00:10:21.053 ************************************ 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:21.053 
* Looking for test storage... 00:10:21.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.053 --rc genhtml_branch_coverage=1 00:10:21.053 --rc genhtml_function_coverage=1 00:10:21.053 --rc genhtml_legend=1 00:10:21.053 --rc geninfo_all_blocks=1 00:10:21.053 --rc geninfo_unexecuted_blocks=1 00:10:21.053 00:10:21.053 ' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.053 --rc genhtml_branch_coverage=1 00:10:21.053 --rc genhtml_function_coverage=1 00:10:21.053 --rc genhtml_legend=1 00:10:21.053 --rc geninfo_all_blocks=1 00:10:21.053 --rc geninfo_unexecuted_blocks=1 00:10:21.053 00:10:21.053 ' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.053 --rc genhtml_branch_coverage=1 00:10:21.053 --rc genhtml_function_coverage=1 00:10:21.053 --rc genhtml_legend=1 00:10:21.053 --rc geninfo_all_blocks=1 00:10:21.053 --rc geninfo_unexecuted_blocks=1 00:10:21.053 00:10:21.053 ' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:21.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.053 --rc genhtml_branch_coverage=1 00:10:21.053 --rc genhtml_function_coverage=1 00:10:21.053 --rc genhtml_legend=1 00:10:21.053 --rc geninfo_all_blocks=1 00:10:21.053 --rc geninfo_unexecuted_blocks=1 00:10:21.053 00:10:21.053 ' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
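The scripts/common.sh trace above is nothing more than a field-by-field version comparison: autotest_common.sh asks whether the installed lcov (1.15 in this run) is older than 2.x so it can keep the pre-2.0 "--rc lcov_*" flag spelling. A condensed sketch under that assumption (lt and cmp_versions come from scripts/common.sh; the flag list is the one echoed by the trace):

source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh   # provides lt/cmp_versions

lcov_version=$(lcov --version | awk '{print $NF}')   # "1.15" on this host
if lt "$lcov_version" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

export LCOV_OPTS="
    $lcov_rc_opt
    --rc genhtml_branch_coverage=1
    --rc genhtml_function_coverage=1
    --rc genhtml_legend=1
    --rc geninfo_all_blocks=1
    --rc geninfo_unexecuted_blocks=1
"
export LCOV="lcov $LCOV_OPTS"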
00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.053 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.054 09:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.585 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.585 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.585 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.585 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.585 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:23.586 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:23.586 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.586 09:31:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:23.586 Found net devices under 0000:84:00.0: cvl_0_0 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:23.586 Found net devices under 0000:84:00.1: cvl_0_1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:10:23.586 00:10:23.586 --- 10.0.0.2 ping statistics --- 00:10:23.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.586 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:10:23.586 00:10:23.586 --- 10.0.0.1 ping statistics --- 00:10:23.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.586 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:23.586 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1438368 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1438368 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1438368 ']' 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.587 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.587 [2024-10-07 09:31:18.282249] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:10:23.587 [2024-10-07 09:31:18.282363] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.587 [2024-10-07 09:31:18.364818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.844 [2024-10-07 09:31:18.483282] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.844 [2024-10-07 09:31:18.483350] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.844 [2024-10-07 09:31:18.483369] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.844 [2024-10-07 09:31:18.483383] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.844 [2024-10-07 09:31:18.483394] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.844 [2024-10-07 09:31:18.484438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.844 [2024-10-07 09:31:18.484491] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.844 [2024-10-07 09:31:18.484509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.844 09:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.409 [2024-10-07 09:31:18.994222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.409 09:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.975 09:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:24.975 09:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.541 09:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:25.541 09:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:25.799 09:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:26.364 09:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d4a35139-45b7-4c7b-89e5-7b0c76f9c496 00:10:26.364 09:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d4a35139-45b7-4c7b-89e5-7b0c76f9c496 lvol 20 00:10:26.622 09:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=17a016e1-5eb5-42cb-bcfa-e011c87a4804 00:10:26.622 09:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:26.880 09:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17a016e1-5eb5-42cb-bcfa-e011c87a4804 00:10:27.446 09:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:27.704 [2024-10-07 09:31:22.375084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.704 09:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.267 09:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1438935 00:10:28.267 09:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:28.267 09:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:29.201 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 17a016e1-5eb5-42cb-bcfa-e011c87a4804 MY_SNAPSHOT 00:10:29.479 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4e28b134-06fa-43e1-aad9-5ceb3a456246 00:10:29.479 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 17a016e1-5eb5-42cb-bcfa-e011c87a4804 30 00:10:30.043 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4e28b134-06fa-43e1-aad9-5ceb3a456246 MY_CLONE 00:10:30.606 09:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=efabc547-1f89-44a9-8cc8-face05f77857 00:10:30.606 09:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate efabc547-1f89-44a9-8cc8-face05f77857 00:10:31.538 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1438935 00:10:39.779 Initializing NVMe Controllers 00:10:39.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:39.780 Controller IO queue size 128, less than required. 00:10:39.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:39.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:39.780 Initialization complete. Launching workers. 00:10:39.780 ======================================================== 00:10:39.780 Latency(us) 00:10:39.780 Device Information : IOPS MiB/s Average min max 00:10:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10350.17 40.43 12372.32 2137.34 86591.48 00:10:39.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10204.88 39.86 12548.55 2157.93 61337.04 00:10:39.780 ======================================================== 00:10:39.780 Total : 20555.05 80.29 12459.81 2137.34 86591.48 00:10:39.780 00:10:39.780 09:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:39.780 09:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17a016e1-5eb5-42cb-bcfa-e011c87a4804 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4a35139-45b7-4c7b-89e5-7b0c76f9c496 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.780 rmmod nvme_tcp 00:10:39.780 rmmod nvme_fabrics 00:10:39.780 rmmod nvme_keyring 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1438368 ']' 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1438368 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1438368 ']' 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1438368 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1438368 00:10:39.780 09:31:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1438368' 00:10:39.780 killing process with pid 1438368 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1438368 00:10:39.780 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1438368 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.347 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.250 00:10:42.250 real 0m21.571s 00:10:42.250 user 1m13.062s 00:10:42.250 sys 0m6.346s 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.250 ************************************ 00:10:42.250 END TEST nvmf_lvol 00:10:42.250 ************************************ 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.250 ************************************ 00:10:42.250 START TEST nvmf_lvs_grow 00:10:42.250 ************************************ 00:10:42.250 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:42.250 * Looking for test storage... 
00:10:42.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.250 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:42.250 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:10:42.250 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.509 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.510 --rc genhtml_branch_coverage=1 00:10:42.510 --rc genhtml_function_coverage=1 00:10:42.510 --rc genhtml_legend=1 00:10:42.510 --rc geninfo_all_blocks=1 00:10:42.510 --rc geninfo_unexecuted_blocks=1 00:10:42.510 00:10:42.510 ' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.510 --rc genhtml_branch_coverage=1 00:10:42.510 --rc genhtml_function_coverage=1 00:10:42.510 --rc genhtml_legend=1 00:10:42.510 --rc geninfo_all_blocks=1 00:10:42.510 --rc geninfo_unexecuted_blocks=1 00:10:42.510 00:10:42.510 ' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.510 --rc genhtml_branch_coverage=1 00:10:42.510 --rc genhtml_function_coverage=1 00:10:42.510 --rc genhtml_legend=1 00:10:42.510 --rc geninfo_all_blocks=1 00:10:42.510 --rc geninfo_unexecuted_blocks=1 00:10:42.510 00:10:42.510 ' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.510 --rc genhtml_branch_coverage=1 00:10:42.510 --rc genhtml_function_coverage=1 00:10:42.510 --rc genhtml_legend=1 00:10:42.510 --rc geninfo_all_blocks=1 00:10:42.510 --rc geninfo_unexecuted_blocks=1 00:10:42.510 00:10:42.510 ' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:42.510 09:31:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:42.510 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.511 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:45.799 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:45.799 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.799 09:31:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:45.799 Found net devices under 0000:84:00.0: cvl_0_0 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:45.799 Found net devices under 0000:84:00.1: cvl_0_1 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.799 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:10:45.800 00:10:45.800 --- 10.0.0.2 ping statistics --- 00:10:45.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.800 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:10:45.800 00:10:45.800 --- 10.0.0.1 ping statistics --- 00:10:45.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.800 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1442358 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1442358 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1442358 ']' 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.800 [2024-10-07 09:31:40.270647] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
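[editor note] For reference, the nvmf_tcp_init sequence traced above condenses to roughly the following shell steps; interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones observed in this run, and the iptables rule is shown without its bookkeeping comment:

  # condensed sketch of the trace above, not literal script source
  ip netns add cvl_0_0_ns_spdk                                   # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns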
00:10:45.800 [2024-10-07 09:31:40.270745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.800 [2024-10-07 09:31:40.339824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.800 [2024-10-07 09:31:40.454323] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.800 [2024-10-07 09:31:40.454394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.800 [2024-10-07 09:31:40.454407] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.800 [2024-10-07 09:31:40.454418] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.800 [2024-10-07 09:31:40.454427] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.800 [2024-10-07 09:31:40.455129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.800 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:46.369 [2024-10-07 09:31:40.905753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.369 ************************************ 00:10:46.369 START TEST lvs_grow_clean 00:10:46.369 ************************************ 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:46.369 09:31:40 
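[editor note] The target start and transport creation above amount to approximately the two commands below; $SPDK_DIR is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk and is not a variable the script itself defines:

  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192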
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.369 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.627 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:46.627 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:46.885 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5faf19fb-3538-41bd-bbbc-5555a7db255a 00:10:46.885 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:10:46.885 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:47.143 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:47.143 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:47.143 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5faf19fb-3538-41bd-bbbc-5555a7db255a lvol 150 00:10:47.401 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5e5865f-5228-4723-825b-8d46ff2adaae 00:10:47.401 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:47.401 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:47.659 [2024-10-07 09:31:42.453748] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:47.659 [2024-10-07 09:31:42.453836] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:47.659 true 00:10:47.659 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5faf19fb-3538-41bd-bbbc-5555a7db255a 00:10:47.659 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:48.225 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:48.225 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:48.483 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5e5865f-5228-4723-825b-8d46ff2adaae 00:10:48.741 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:48.999 [2024-10-07 09:31:43.681497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.999 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1442798 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1442798 /var/tmp/bdevperf.sock 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1442798 ']' 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.258 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:49.258 [2024-10-07 09:31:44.060618] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
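[editor note] The lvs_grow_clean setup traced above reduces to roughly the RPC sequence below; rpc.py and $SPDK_DIR stand for the full paths in the log, and $lvs/$lvol are placeholders for the UUIDs printed above (5faf19fb-... and d5e5865f-...):

  truncate -s 200M $SPDK_DIR/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create $SPDK_DIR/test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol; lvstore starts with 49 data clusters
  truncate -s 400M $SPDK_DIR/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev                      # enlarge the backing file so the lvstore can grow later
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf then attaches to the exported namespace over NVMe/TCP and runs randwrite for 10 s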
00:10:49.258 [2024-10-07 09:31:44.060700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442798 ] 00:10:49.516 [2024-10-07 09:31:44.128874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.516 [2024-10-07 09:31:44.247131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.774 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.774 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:49.774 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:50.032 Nvme0n1 00:10:50.032 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:50.597 [ 00:10:50.597 { 00:10:50.597 "name": "Nvme0n1", 00:10:50.597 "aliases": [ 00:10:50.597 "d5e5865f-5228-4723-825b-8d46ff2adaae" 00:10:50.597 ], 00:10:50.597 "product_name": "NVMe disk", 00:10:50.597 "block_size": 4096, 00:10:50.597 "num_blocks": 38912, 00:10:50.597 "uuid": "d5e5865f-5228-4723-825b-8d46ff2adaae", 00:10:50.597 "numa_id": 1, 00:10:50.597 "assigned_rate_limits": { 00:10:50.597 "rw_ios_per_sec": 0, 00:10:50.597 "rw_mbytes_per_sec": 0, 00:10:50.597 "r_mbytes_per_sec": 0, 00:10:50.597 "w_mbytes_per_sec": 0 00:10:50.597 }, 00:10:50.597 "claimed": false, 00:10:50.597 "zoned": false, 00:10:50.597 "supported_io_types": { 00:10:50.597 "read": true, 00:10:50.597 "write": true, 00:10:50.597 "unmap": true, 00:10:50.597 "flush": true, 00:10:50.597 "reset": true, 00:10:50.597 "nvme_admin": true, 00:10:50.597 "nvme_io": true, 00:10:50.597 "nvme_io_md": false, 00:10:50.597 "write_zeroes": true, 00:10:50.597 "zcopy": false, 00:10:50.597 "get_zone_info": false, 00:10:50.597 "zone_management": false, 00:10:50.597 "zone_append": false, 00:10:50.597 "compare": true, 00:10:50.597 "compare_and_write": true, 00:10:50.597 "abort": true, 00:10:50.597 "seek_hole": false, 00:10:50.597 "seek_data": false, 00:10:50.597 "copy": true, 00:10:50.597 "nvme_iov_md": false 00:10:50.597 }, 00:10:50.597 "memory_domains": [ 00:10:50.597 { 00:10:50.597 "dma_device_id": "system", 00:10:50.597 "dma_device_type": 1 00:10:50.597 } 00:10:50.597 ], 00:10:50.597 "driver_specific": { 00:10:50.597 "nvme": [ 00:10:50.597 { 00:10:50.597 "trid": { 00:10:50.597 "trtype": "TCP", 00:10:50.597 "adrfam": "IPv4", 00:10:50.597 "traddr": "10.0.0.2", 00:10:50.597 "trsvcid": "4420", 00:10:50.597 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:50.597 }, 00:10:50.597 "ctrlr_data": { 00:10:50.597 "cntlid": 1, 00:10:50.597 "vendor_id": "0x8086", 00:10:50.597 "model_number": "SPDK bdev Controller", 00:10:50.597 "serial_number": "SPDK0", 00:10:50.597 "firmware_revision": "25.01", 00:10:50.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.597 "oacs": { 00:10:50.597 "security": 0, 00:10:50.597 "format": 0, 00:10:50.597 "firmware": 0, 00:10:50.597 "ns_manage": 0 00:10:50.597 }, 00:10:50.597 "multi_ctrlr": true, 00:10:50.597 
"ana_reporting": false 00:10:50.597 }, 00:10:50.597 "vs": { 00:10:50.597 "nvme_version": "1.3" 00:10:50.597 }, 00:10:50.597 "ns_data": { 00:10:50.597 "id": 1, 00:10:50.597 "can_share": true 00:10:50.597 } 00:10:50.597 } 00:10:50.597 ], 00:10:50.597 "mp_policy": "active_passive" 00:10:50.597 } 00:10:50.597 } 00:10:50.597 ] 00:10:50.597 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1442941 00:10:50.597 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:50.597 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:50.856 Running I/O for 10 seconds... 00:10:51.790 Latency(us) 00:10:51.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.790 Nvme0n1 : 1.00 13924.00 54.39 0.00 0.00 0.00 0.00 0.00 00:10:51.790 =================================================================================================================== 00:10:51.790 Total : 13924.00 54.39 0.00 0.00 0.00 0.00 0.00 00:10:51.790 00:10:52.725 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:10:52.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.725 Nvme0n1 : 2.00 14074.00 54.98 0.00 0.00 0.00 0.00 0.00 00:10:52.725 =================================================================================================================== 00:10:52.725 Total : 14074.00 54.98 0.00 0.00 0.00 0.00 0.00 00:10:52.725 00:10:52.984 true 00:10:52.984 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:10:52.984 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:53.243 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:53.243 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:53.243 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1442941 00:10:53.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.810 Nvme0n1 : 3.00 14167.00 55.34 0.00 0.00 0.00 0.00 0.00 00:10:53.810 =================================================================================================================== 00:10:53.810 Total : 14167.00 55.34 0.00 0.00 0.00 0.00 0.00 00:10:53.810 00:10:54.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.745 Nvme0n1 : 4.00 14234.50 55.60 0.00 0.00 0.00 0.00 0.00 00:10:54.745 =================================================================================================================== 00:10:54.745 Total : 14234.50 55.60 0.00 0.00 0.00 0.00 0.00 00:10:54.745 00:10:55.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.681 
Nvme0n1 : 5.00 14283.80 55.80 0.00 0.00 0.00 0.00 0.00 00:10:55.681 =================================================================================================================== 00:10:55.681 Total : 14283.80 55.80 0.00 0.00 0.00 0.00 0.00 00:10:55.681 00:10:57.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.056 Nvme0n1 : 6.00 14327.50 55.97 0.00 0.00 0.00 0.00 0.00 00:10:57.056 =================================================================================================================== 00:10:57.056 Total : 14327.50 55.97 0.00 0.00 0.00 0.00 0.00 00:10:57.056 00:10:57.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.991 Nvme0n1 : 7.00 14412.71 56.30 0.00 0.00 0.00 0.00 0.00 00:10:57.991 =================================================================================================================== 00:10:57.991 Total : 14412.71 56.30 0.00 0.00 0.00 0.00 0.00 00:10:57.991 00:10:58.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.926 Nvme0n1 : 8.00 14494.00 56.62 0.00 0.00 0.00 0.00 0.00 00:10:58.926 =================================================================================================================== 00:10:58.926 Total : 14494.00 56.62 0.00 0.00 0.00 0.00 0.00 00:10:58.926 00:10:59.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.862 Nvme0n1 : 9.00 14549.78 56.84 0.00 0.00 0.00 0.00 0.00 00:10:59.862 =================================================================================================================== 00:10:59.862 Total : 14549.78 56.84 0.00 0.00 0.00 0.00 0.00 00:10:59.862 00:11:00.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.797 Nvme0n1 : 10.00 14562.10 56.88 0.00 0.00 0.00 0.00 0.00 00:11:00.797 =================================================================================================================== 00:11:00.797 Total : 14562.10 56.88 0.00 0.00 0.00 0.00 0.00 00:11:00.797 00:11:00.797 00:11:00.797 Latency(us) 00:11:00.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.797 Nvme0n1 : 10.01 14567.26 56.90 0.00 0.00 8782.13 3665.16 16990.81 00:11:00.797 =================================================================================================================== 00:11:00.797 Total : 14567.26 56.90 0.00 0.00 8782.13 3665.16 16990.81 00:11:00.797 { 00:11:00.797 "results": [ 00:11:00.797 { 00:11:00.797 "job": "Nvme0n1", 00:11:00.797 "core_mask": "0x2", 00:11:00.797 "workload": "randwrite", 00:11:00.797 "status": "finished", 00:11:00.797 "queue_depth": 128, 00:11:00.797 "io_size": 4096, 00:11:00.797 "runtime": 10.005247, 00:11:00.797 "iops": 14567.256560482714, 00:11:00.797 "mibps": 56.9033459393856, 00:11:00.797 "io_failed": 0, 00:11:00.797 "io_timeout": 0, 00:11:00.797 "avg_latency_us": 8782.125398555558, 00:11:00.797 "min_latency_us": 3665.1614814814816, 00:11:00.797 "max_latency_us": 16990.814814814814 00:11:00.797 } 00:11:00.797 ], 00:11:00.797 "core_count": 1 00:11:00.797 } 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1442798 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1442798 ']' 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 1442798 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442798 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442798' 00:11:00.797 killing process with pid 1442798 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1442798 00:11:00.797 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.797 00:11:00.797 Latency(us) 00:11:00.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.797 =================================================================================================================== 00:11:00.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.797 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1442798 00:11:01.056 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:01.622 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.880 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:01.880 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:02.138 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:02.138 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:02.138 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:02.397 [2024-10-07 09:31:57.208117] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:02.656 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:02.914 request: 00:11:02.914 { 00:11:02.914 "uuid": "5faf19fb-3538-41bd-bbbc-5555a7db255a", 00:11:02.914 "method": "bdev_lvol_get_lvstores", 00:11:02.914 "req_id": 1 00:11:02.914 } 00:11:02.914 Got JSON-RPC error response 00:11:02.914 response: 00:11:02.914 { 00:11:02.914 "code": -19, 00:11:02.914 "message": "No such device" 00:11:02.914 } 00:11:02.914 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:02.914 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.914 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.914 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.914 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:03.172 aio_bdev 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d5e5865f-5228-4723-825b-8d46ff2adaae 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d5e5865f-5228-4723-825b-8d46ff2adaae 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:03.172 09:31:57 
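[editor note] The teardown check above exercises the lvstore hot-remove path; in shorthand, with the same $SPDK_DIR/$lvs/$lvol placeholders as before:

  rpc.py bdev_aio_delete aio_bdev              # removing the base bdev closes lvstore "lvs" with it
  rpc.py bdev_lvol_get_lvstores -u "$lvs"      # now expected to fail with -19 "No such device"
  rpc.py bdev_aio_create $SPDK_DIR/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine                 # re-examining the AIO bdev brings the lvstore and lvol back
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000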
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.172 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:03.430 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5e5865f-5228-4723-825b-8d46ff2adaae -t 2000 00:11:03.993 [ 00:11:03.993 { 00:11:03.993 "name": "d5e5865f-5228-4723-825b-8d46ff2adaae", 00:11:03.993 "aliases": [ 00:11:03.993 "lvs/lvol" 00:11:03.993 ], 00:11:03.993 "product_name": "Logical Volume", 00:11:03.993 "block_size": 4096, 00:11:03.993 "num_blocks": 38912, 00:11:03.993 "uuid": "d5e5865f-5228-4723-825b-8d46ff2adaae", 00:11:03.993 "assigned_rate_limits": { 00:11:03.993 "rw_ios_per_sec": 0, 00:11:03.993 "rw_mbytes_per_sec": 0, 00:11:03.993 "r_mbytes_per_sec": 0, 00:11:03.993 "w_mbytes_per_sec": 0 00:11:03.993 }, 00:11:03.993 "claimed": false, 00:11:03.993 "zoned": false, 00:11:03.993 "supported_io_types": { 00:11:03.993 "read": true, 00:11:03.993 "write": true, 00:11:03.993 "unmap": true, 00:11:03.993 "flush": false, 00:11:03.993 "reset": true, 00:11:03.993 "nvme_admin": false, 00:11:03.993 "nvme_io": false, 00:11:03.993 "nvme_io_md": false, 00:11:03.993 "write_zeroes": true, 00:11:03.993 "zcopy": false, 00:11:03.993 "get_zone_info": false, 00:11:03.993 "zone_management": false, 00:11:03.993 "zone_append": false, 00:11:03.993 "compare": false, 00:11:03.993 "compare_and_write": false, 00:11:03.993 "abort": false, 00:11:03.993 "seek_hole": true, 00:11:03.993 "seek_data": true, 00:11:03.993 "copy": false, 00:11:03.993 "nvme_iov_md": false 00:11:03.993 }, 00:11:03.993 "driver_specific": { 00:11:03.993 "lvol": { 00:11:03.993 "lvol_store_uuid": "5faf19fb-3538-41bd-bbbc-5555a7db255a", 00:11:03.993 "base_bdev": "aio_bdev", 00:11:03.993 "thin_provision": false, 00:11:03.993 "num_allocated_clusters": 38, 00:11:03.993 "snapshot": false, 00:11:03.993 "clone": false, 00:11:03.993 "esnap_clone": false 00:11:03.993 } 00:11:03.993 } 00:11:03.993 } 00:11:03.993 ] 00:11:03.993 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:03.993 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:03.993 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:04.251 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:04.251 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:04.251 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:04.509 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:04.509 09:31:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5e5865f-5228-4723-825b-8d46ff2adaae 00:11:04.766 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5faf19fb-3538-41bd-bbbc-5555a7db255a 00:11:05.332 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.589 00:11:05.589 real 0m19.267s 00:11:05.589 user 0m19.226s 00:11:05.589 sys 0m2.031s 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 ************************************ 00:11:05.589 END TEST lvs_grow_clean 00:11:05.589 ************************************ 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 ************************************ 00:11:05.589 START TEST lvs_grow_dirty 00:11:05.589 ************************************ 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.589 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.590 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.565 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:06.565 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:06.565 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:06.565 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:06.565 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:06.854 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:07.111 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:07.111 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 lvol 150 00:11:07.369 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=666bffb4-0677-4155-af81-2b61e2e71c39 00:11:07.369 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.369 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:07.628 [2024-10-07 09:32:02.281988] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:07.628 [2024-10-07 09:32:02.282083] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:07.628 true 00:11:07.628 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:07.628 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:07.887 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:07.887 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.145 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 666bffb4-0677-4155-af81-2b61e2e71c39 00:11:08.711 09:32:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:08.969 [2024-10-07 09:32:03.557902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.969 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1445128 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1445128 /var/tmp/bdevperf.sock 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1445128 ']' 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.227 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:09.227 [2024-10-07 09:32:03.941151] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:11:09.227 [2024-10-07 09:32:03.941244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445128 ] 00:11:09.227 [2024-10-07 09:32:04.008017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.485 [2024-10-07 09:32:04.126205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.485 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.485 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:09.485 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:10.051 Nvme0n1 00:11:10.051 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:10.617 [ 00:11:10.617 { 00:11:10.617 "name": "Nvme0n1", 00:11:10.617 "aliases": [ 00:11:10.617 "666bffb4-0677-4155-af81-2b61e2e71c39" 00:11:10.617 ], 00:11:10.617 "product_name": "NVMe disk", 00:11:10.617 "block_size": 4096, 00:11:10.617 "num_blocks": 38912, 00:11:10.617 "uuid": "666bffb4-0677-4155-af81-2b61e2e71c39", 00:11:10.617 "numa_id": 1, 00:11:10.617 "assigned_rate_limits": { 00:11:10.617 "rw_ios_per_sec": 0, 00:11:10.617 "rw_mbytes_per_sec": 0, 00:11:10.617 "r_mbytes_per_sec": 0, 00:11:10.617 "w_mbytes_per_sec": 0 00:11:10.617 }, 00:11:10.617 "claimed": false, 00:11:10.617 "zoned": false, 00:11:10.617 "supported_io_types": { 00:11:10.617 "read": true, 00:11:10.617 "write": true, 00:11:10.617 "unmap": true, 00:11:10.617 "flush": true, 00:11:10.617 "reset": true, 00:11:10.617 "nvme_admin": true, 00:11:10.617 "nvme_io": true, 00:11:10.617 "nvme_io_md": false, 00:11:10.617 "write_zeroes": true, 00:11:10.617 "zcopy": false, 00:11:10.617 "get_zone_info": false, 00:11:10.617 "zone_management": false, 00:11:10.617 "zone_append": false, 00:11:10.617 "compare": true, 00:11:10.617 "compare_and_write": true, 00:11:10.617 "abort": true, 00:11:10.617 "seek_hole": false, 00:11:10.617 "seek_data": false, 00:11:10.617 "copy": true, 00:11:10.617 "nvme_iov_md": false 00:11:10.617 }, 00:11:10.617 "memory_domains": [ 00:11:10.617 { 00:11:10.617 "dma_device_id": "system", 00:11:10.617 "dma_device_type": 1 00:11:10.617 } 00:11:10.617 ], 00:11:10.617 "driver_specific": { 00:11:10.617 "nvme": [ 00:11:10.617 { 00:11:10.617 "trid": { 00:11:10.617 "trtype": "TCP", 00:11:10.617 "adrfam": "IPv4", 00:11:10.617 "traddr": "10.0.0.2", 00:11:10.617 "trsvcid": "4420", 00:11:10.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:10.617 }, 00:11:10.617 "ctrlr_data": { 00:11:10.617 "cntlid": 1, 00:11:10.617 "vendor_id": "0x8086", 00:11:10.617 "model_number": "SPDK bdev Controller", 00:11:10.617 "serial_number": "SPDK0", 00:11:10.617 "firmware_revision": "25.01", 00:11:10.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:10.617 "oacs": { 00:11:10.617 "security": 0, 00:11:10.617 "format": 0, 00:11:10.617 "firmware": 0, 00:11:10.617 "ns_manage": 0 00:11:10.617 }, 00:11:10.617 "multi_ctrlr": true, 00:11:10.617 
"ana_reporting": false 00:11:10.617 }, 00:11:10.617 "vs": { 00:11:10.617 "nvme_version": "1.3" 00:11:10.617 }, 00:11:10.617 "ns_data": { 00:11:10.617 "id": 1, 00:11:10.617 "can_share": true 00:11:10.617 } 00:11:10.617 } 00:11:10.617 ], 00:11:10.617 "mp_policy": "active_passive" 00:11:10.617 } 00:11:10.617 } 00:11:10.617 ] 00:11:10.617 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1445379 00:11:10.617 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:10.617 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.876 Running I/O for 10 seconds... 00:11:11.810 Latency(us) 00:11:11.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.810 Nvme0n1 : 1.00 14038.00 54.84 0.00 0.00 0.00 0.00 0.00 00:11:11.810 =================================================================================================================== 00:11:11.810 Total : 14038.00 54.84 0.00 0.00 0.00 0.00 0.00 00:11:11.810 00:11:12.746 09:32:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:12.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.746 Nvme0n1 : 2.00 14322.00 55.95 0.00 0.00 0.00 0.00 0.00 00:11:12.746 =================================================================================================================== 00:11:12.746 Total : 14322.00 55.95 0.00 0.00 0.00 0.00 0.00 00:11:12.746 00:11:13.005 true 00:11:13.005 09:32:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:13.005 09:32:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:13.264 09:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:13.264 09:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:13.264 09:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1445379 00:11:13.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.830 Nvme0n1 : 3.00 14374.00 56.15 0.00 0.00 0.00 0.00 0.00 00:11:13.830 =================================================================================================================== 00:11:13.831 Total : 14374.00 56.15 0.00 0.00 0.00 0.00 0.00 00:11:13.831 00:11:14.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.765 Nvme0n1 : 4.00 14417.75 56.32 0.00 0.00 0.00 0.00 0.00 00:11:14.765 =================================================================================================================== 00:11:14.765 Total : 14417.75 56.32 0.00 0.00 0.00 0.00 0.00 00:11:14.765 00:11:16.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.140 
Nvme0n1 : 5.00 14421.60 56.33 0.00 0.00 0.00 0.00 0.00 00:11:16.140 =================================================================================================================== 00:11:16.140 Total : 14421.60 56.33 0.00 0.00 0.00 0.00 0.00 00:11:16.140 00:11:17.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.073 Nvme0n1 : 6.00 14463.17 56.50 0.00 0.00 0.00 0.00 0.00 00:11:17.073 =================================================================================================================== 00:11:17.073 Total : 14463.17 56.50 0.00 0.00 0.00 0.00 0.00 00:11:17.073 00:11:18.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.008 Nvme0n1 : 7.00 14556.29 56.86 0.00 0.00 0.00 0.00 0.00 00:11:18.008 =================================================================================================================== 00:11:18.008 Total : 14556.29 56.86 0.00 0.00 0.00 0.00 0.00 00:11:18.008 00:11:18.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.944 Nvme0n1 : 8.00 14610.25 57.07 0.00 0.00 0.00 0.00 0.00 00:11:18.944 =================================================================================================================== 00:11:18.944 Total : 14610.25 57.07 0.00 0.00 0.00 0.00 0.00 00:11:18.944 00:11:19.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.879 Nvme0n1 : 9.00 14637.89 57.18 0.00 0.00 0.00 0.00 0.00 00:11:19.879 =================================================================================================================== 00:11:19.879 Total : 14637.89 57.18 0.00 0.00 0.00 0.00 0.00 00:11:19.879 00:11:20.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.814 Nvme0n1 : 10.00 14647.30 57.22 0.00 0.00 0.00 0.00 0.00 00:11:20.814 =================================================================================================================== 00:11:20.814 Total : 14647.30 57.22 0.00 0.00 0.00 0.00 0.00 00:11:20.814 00:11:20.814 00:11:20.814 Latency(us) 00:11:20.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.814 Nvme0n1 : 10.01 14650.51 57.23 0.00 0.00 8732.30 2233.08 17379.18 00:11:20.814 =================================================================================================================== 00:11:20.814 Total : 14650.51 57.23 0.00 0.00 8732.30 2233.08 17379.18 00:11:20.814 { 00:11:20.814 "results": [ 00:11:20.814 { 00:11:20.814 "job": "Nvme0n1", 00:11:20.814 "core_mask": "0x2", 00:11:20.814 "workload": "randwrite", 00:11:20.814 "status": "finished", 00:11:20.814 "queue_depth": 128, 00:11:20.814 "io_size": 4096, 00:11:20.814 "runtime": 10.006544, 00:11:20.814 "iops": 14650.512704486184, 00:11:20.814 "mibps": 57.228565251899155, 00:11:20.814 "io_failed": 0, 00:11:20.814 "io_timeout": 0, 00:11:20.814 "avg_latency_us": 8732.302735421692, 00:11:20.814 "min_latency_us": 2233.0785185185186, 00:11:20.814 "max_latency_us": 17379.176296296297 00:11:20.814 } 00:11:20.814 ], 00:11:20.814 "core_count": 1 00:11:20.814 } 00:11:20.814 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1445128 00:11:20.814 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1445128 ']' 00:11:20.814 09:32:15 
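[editor note] In both the clean and dirty passes, the grow step issued during the 10-second bdevperf run is simply the pair below (UUID abbreviated to $lvs):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after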
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1445128 00:11:20.814 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:20.814 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.814 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1445128 00:11:21.073 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:21.073 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:21.073 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1445128' 00:11:21.073 killing process with pid 1445128 00:11:21.073 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1445128 00:11:21.073 Received shutdown signal, test time was about 10.000000 seconds 00:11:21.073 00:11:21.073 Latency(us) 00:11:21.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.073 =================================================================================================================== 00:11:21.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:21.073 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1445128 00:11:21.331 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.589 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:21.847 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:21.847 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1442358 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1442358 00:11:22.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1442358 Killed "${NVMF_APP[@]}" "$@" 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:22.106 09:32:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1446721 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1446721 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1446721 ']' 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.106 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.365 [2024-10-07 09:32:16.980175] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:11:22.365 [2024-10-07 09:32:16.980342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.365 [2024-10-07 09:32:17.087541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.624 [2024-10-07 09:32:17.210168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.624 [2024-10-07 09:32:17.210227] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.624 [2024-10-07 09:32:17.210245] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.624 [2024-10-07 09:32:17.210259] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.624 [2024-10-07 09:32:17.210270] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
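(Editor's note, not part of the captured output: the app_setup_trace notices above name the tracing hooks for this nvmf_tgt instance. A minimal sketch of how those hints could be used, assuming the SPDK build tree from this workspace; the snapshot file name is illustrative only.)

    # Quoted from the notice above: decode a live snapshot of the 'nvmf' trace group, instance 0.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # Or keep the raw shared-memory trace file for offline analysis/debug, as the notice suggests.
    cp /dev/shm/nvmf_trace.0 .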
00:11:22.624 [2024-10-07 09:32:17.211011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.624 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:22.882 [2024-10-07 09:32:17.653788] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:22.882 [2024-10-07 09:32:17.653960] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:22.882 [2024-10-07 09:32:17.654019] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 666bffb4-0677-4155-af81-2b61e2e71c39 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=666bffb4-0677-4155-af81-2b61e2e71c39 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.882 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:23.449 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 666bffb4-0677-4155-af81-2b61e2e71c39 -t 2000 00:11:23.711 [ 00:11:23.711 { 00:11:23.711 "name": "666bffb4-0677-4155-af81-2b61e2e71c39", 00:11:23.711 "aliases": [ 00:11:23.711 "lvs/lvol" 00:11:23.711 ], 00:11:23.711 "product_name": "Logical Volume", 00:11:23.711 "block_size": 4096, 00:11:23.711 "num_blocks": 38912, 00:11:23.711 "uuid": "666bffb4-0677-4155-af81-2b61e2e71c39", 00:11:23.711 "assigned_rate_limits": { 00:11:23.711 "rw_ios_per_sec": 0, 00:11:23.711 "rw_mbytes_per_sec": 0, 00:11:23.711 "r_mbytes_per_sec": 0, 00:11:23.711 "w_mbytes_per_sec": 0 00:11:23.711 }, 00:11:23.711 "claimed": false, 00:11:23.711 "zoned": false, 
00:11:23.711 "supported_io_types": { 00:11:23.711 "read": true, 00:11:23.711 "write": true, 00:11:23.711 "unmap": true, 00:11:23.711 "flush": false, 00:11:23.711 "reset": true, 00:11:23.711 "nvme_admin": false, 00:11:23.711 "nvme_io": false, 00:11:23.711 "nvme_io_md": false, 00:11:23.711 "write_zeroes": true, 00:11:23.711 "zcopy": false, 00:11:23.711 "get_zone_info": false, 00:11:23.711 "zone_management": false, 00:11:23.711 "zone_append": false, 00:11:23.711 "compare": false, 00:11:23.711 "compare_and_write": false, 00:11:23.711 "abort": false, 00:11:23.711 "seek_hole": true, 00:11:23.711 "seek_data": true, 00:11:23.711 "copy": false, 00:11:23.711 "nvme_iov_md": false 00:11:23.711 }, 00:11:23.711 "driver_specific": { 00:11:23.711 "lvol": { 00:11:23.711 "lvol_store_uuid": "b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2", 00:11:23.711 "base_bdev": "aio_bdev", 00:11:23.711 "thin_provision": false, 00:11:23.711 "num_allocated_clusters": 38, 00:11:23.711 "snapshot": false, 00:11:23.711 "clone": false, 00:11:23.711 "esnap_clone": false 00:11:23.711 } 00:11:23.711 } 00:11:23.711 } 00:11:23.711 ] 00:11:23.711 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:23.711 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:23.711 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:23.972 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:23.972 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:23.972 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:24.229 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:24.229 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:24.796 [2024-10-07 09:32:19.319886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:24.796 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:24.796 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.797 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:25.055 request: 00:11:25.055 { 00:11:25.055 "uuid": "b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2", 00:11:25.055 "method": "bdev_lvol_get_lvstores", 00:11:25.055 "req_id": 1 00:11:25.055 } 00:11:25.055 Got JSON-RPC error response 00:11:25.055 response: 00:11:25.055 { 00:11:25.055 "code": -19, 00:11:25.055 "message": "No such device" 00:11:25.055 } 00:11:25.055 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:25.055 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.055 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.055 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.055 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:25.313 aio_bdev 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 666bffb4-0677-4155-af81-2b61e2e71c39 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=666bffb4-0677-4155-af81-2b61e2e71c39 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.313 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:25.572 09:32:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 666bffb4-0677-4155-af81-2b61e2e71c39 -t 2000 00:11:25.830 [ 00:11:25.830 { 00:11:25.830 "name": "666bffb4-0677-4155-af81-2b61e2e71c39", 00:11:25.830 "aliases": [ 00:11:25.830 "lvs/lvol" 00:11:25.830 ], 00:11:25.830 "product_name": "Logical Volume", 00:11:25.830 "block_size": 4096, 00:11:25.830 "num_blocks": 38912, 00:11:25.830 "uuid": "666bffb4-0677-4155-af81-2b61e2e71c39", 00:11:25.830 "assigned_rate_limits": { 00:11:25.830 "rw_ios_per_sec": 0, 00:11:25.830 "rw_mbytes_per_sec": 0, 00:11:25.830 "r_mbytes_per_sec": 0, 00:11:25.830 "w_mbytes_per_sec": 0 00:11:25.830 }, 00:11:25.830 "claimed": false, 00:11:25.830 "zoned": false, 00:11:25.830 "supported_io_types": { 00:11:25.830 "read": true, 00:11:25.830 "write": true, 00:11:25.830 "unmap": true, 00:11:25.830 "flush": false, 00:11:25.830 "reset": true, 00:11:25.830 "nvme_admin": false, 00:11:25.830 "nvme_io": false, 00:11:25.830 "nvme_io_md": false, 00:11:25.830 "write_zeroes": true, 00:11:25.830 "zcopy": false, 00:11:25.830 "get_zone_info": false, 00:11:25.830 "zone_management": false, 00:11:25.830 "zone_append": false, 00:11:25.830 "compare": false, 00:11:25.830 "compare_and_write": false, 00:11:25.830 "abort": false, 00:11:25.830 "seek_hole": true, 00:11:25.830 "seek_data": true, 00:11:25.830 "copy": false, 00:11:25.830 "nvme_iov_md": false 00:11:25.830 }, 00:11:25.830 "driver_specific": { 00:11:25.830 "lvol": { 00:11:25.830 "lvol_store_uuid": "b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2", 00:11:25.830 "base_bdev": "aio_bdev", 00:11:25.830 "thin_provision": false, 00:11:25.830 "num_allocated_clusters": 38, 00:11:25.830 "snapshot": false, 00:11:25.830 "clone": false, 00:11:25.830 "esnap_clone": false 00:11:25.830 } 00:11:25.830 } 00:11:25.830 } 00:11:25.830 ] 00:11:25.830 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:25.830 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:25.830 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:26.395 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:26.653 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:26.653 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 00:11:27.220 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:27.220 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 666bffb4-0677-4155-af81-2b61e2e71c39 00:11:27.479 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2 
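(Editor's note, not part of the captured output: the trace lines above re-query the recovered lvstore and compare its cluster counts before tearing it down. Collapsed into plain shell, and assuming rpc.py and jq exactly as used throughout this run, the check amounts to the sketch below; the UUID and expected counts are taken from this log.)

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    UUID=b88c87de-d57b-4fa3-9a7f-37d81a2fd1d2
    free=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].free_clusters')
    total=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].total_data_clusters')
    # The dirty-grow test expects the grown geometry to survive recovery: 99 data clusters, 61 free.
    (( free == 61 && total == 99 )) && echo OK || echo "unexpected: free=$free total=$total"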
00:11:27.738 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:28.305 00:11:28.305 real 0m22.569s 00:11:28.305 user 0m55.734s 00:11:28.305 sys 0m5.205s 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.305 ************************************ 00:11:28.305 END TEST lvs_grow_dirty 00:11:28.305 ************************************ 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:28.305 nvmf_trace.0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.305 rmmod nvme_tcp 00:11:28.305 rmmod nvme_fabrics 00:11:28.305 rmmod nvme_keyring 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1446721 ']' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1446721 00:11:28.305 
09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1446721 ']' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1446721 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.305 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1446721 00:11:28.305 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.305 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.305 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1446721' 00:11:28.305 killing process with pid 1446721 00:11:28.305 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1446721 00:11:28.305 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1446721 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.564 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.565 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.565 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.565 09:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.100 00:11:31.100 real 0m48.397s 00:11:31.100 user 1m22.802s 00:11:31.100 sys 0m9.946s 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.100 ************************************ 00:11:31.100 END TEST nvmf_lvs_grow 00:11:31.100 ************************************ 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.100 ************************************ 00:11:31.100 START TEST nvmf_bdev_io_wait 00:11:31.100 ************************************ 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:31.100 * Looking for test storage... 00:11:31.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.100 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:31.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.101 --rc genhtml_branch_coverage=1 00:11:31.101 --rc genhtml_function_coverage=1 00:11:31.101 --rc genhtml_legend=1 00:11:31.101 --rc geninfo_all_blocks=1 00:11:31.101 --rc geninfo_unexecuted_blocks=1 00:11:31.101 00:11:31.101 ' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:31.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.101 --rc genhtml_branch_coverage=1 00:11:31.101 --rc genhtml_function_coverage=1 00:11:31.101 --rc genhtml_legend=1 00:11:31.101 --rc geninfo_all_blocks=1 00:11:31.101 --rc geninfo_unexecuted_blocks=1 00:11:31.101 00:11:31.101 ' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:31.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.101 --rc genhtml_branch_coverage=1 00:11:31.101 --rc genhtml_function_coverage=1 00:11:31.101 --rc genhtml_legend=1 00:11:31.101 --rc geninfo_all_blocks=1 00:11:31.101 --rc geninfo_unexecuted_blocks=1 00:11:31.101 00:11:31.101 ' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:31.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.101 --rc genhtml_branch_coverage=1 00:11:31.101 --rc genhtml_function_coverage=1 00:11:31.101 --rc genhtml_legend=1 00:11:31.101 --rc geninfo_all_blocks=1 00:11:31.101 --rc geninfo_unexecuted_blocks=1 00:11:31.101 00:11:31.101 ' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.101 09:32:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:31.101 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.102 09:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:33.734 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:33.734 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.734 09:32:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:33.734 Found net devices under 0000:84:00.0: cvl_0_0 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:33.734 Found net devices under 0000:84:00.1: cvl_0_1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:11:33.734 00:11:33.734 --- 10.0.0.2 ping statistics --- 00:11:33.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.734 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:11:33.734 00:11:33.734 --- 10.0.0.1 ping statistics --- 00:11:33.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.734 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1449535 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1449535 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1449535 ']' 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.734 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 [2024-10-07 09:32:28.519183] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:11:33.734 [2024-10-07 09:32:28.519343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.996 [2024-10-07 09:32:28.622410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.996 [2024-10-07 09:32:28.746660] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.996 [2024-10-07 09:32:28.746733] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.996 [2024-10-07 09:32:28.746749] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.996 [2024-10-07 09:32:28.746762] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.996 [2024-10-07 09:32:28.746774] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.996 [2024-10-07 09:32:28.748756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.996 [2024-10-07 09:32:28.748849] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.996 [2024-10-07 09:32:28.748916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.996 [2024-10-07 09:32:28.748920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:34.563 [2024-10-07 09:32:29.210378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 Malloc0 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.563 [2024-10-07 09:32:29.280092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1449569 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1449570 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1449573 00:11:34.563 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:34.564 { 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme$subsystem", 00:11:34.564 "trtype": "$TEST_TRANSPORT", 00:11:34.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "$NVMF_PORT", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.564 "hdgst": ${hdgst:-false}, 00:11:34.564 "ddgst": ${ddgst:-false} 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 } 00:11:34.564 EOF 00:11:34.564 )") 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:34.564 { 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme$subsystem", 00:11:34.564 "trtype": "$TEST_TRANSPORT", 00:11:34.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "$NVMF_PORT", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.564 "hdgst": ${hdgst:-false}, 00:11:34.564 "ddgst": ${ddgst:-false} 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 } 00:11:34.564 EOF 00:11:34.564 )") 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1449575 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # 
config+=("$(cat <<-EOF 00:11:34.564 { 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme$subsystem", 00:11:34.564 "trtype": "$TEST_TRANSPORT", 00:11:34.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "$NVMF_PORT", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.564 "hdgst": ${hdgst:-false}, 00:11:34.564 "ddgst": ${ddgst:-false} 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 } 00:11:34.564 EOF 00:11:34.564 )") 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:34.564 { 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme$subsystem", 00:11:34.564 "trtype": "$TEST_TRANSPORT", 00:11:34.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "$NVMF_PORT", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.564 "hdgst": ${hdgst:-false}, 00:11:34.564 "ddgst": ${ddgst:-false} 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 } 00:11:34.564 EOF 00:11:34.564 )") 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1449569 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme1", 00:11:34.564 "trtype": "tcp", 00:11:34.564 "traddr": "10.0.0.2", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "4420", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.564 "hdgst": false, 00:11:34.564 "ddgst": false 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 }' 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme1", 00:11:34.564 "trtype": "tcp", 00:11:34.564 "traddr": "10.0.0.2", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "4420", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.564 "hdgst": false, 00:11:34.564 "ddgst": false 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 }' 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme1", 00:11:34.564 "trtype": "tcp", 00:11:34.564 "traddr": "10.0.0.2", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "4420", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.564 "hdgst": false, 00:11:34.564 "ddgst": false 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 }' 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:34.564 09:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:34.564 "params": { 00:11:34.564 "name": "Nvme1", 00:11:34.564 "trtype": "tcp", 00:11:34.564 "traddr": "10.0.0.2", 00:11:34.564 "adrfam": "ipv4", 00:11:34.564 "trsvcid": "4420", 00:11:34.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.564 "hdgst": false, 00:11:34.564 "ddgst": false 00:11:34.564 }, 00:11:34.564 "method": "bdev_nvme_attach_controller" 00:11:34.564 }' 00:11:34.564 [2024-10-07 09:32:29.332384] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:11:34.564 [2024-10-07 09:32:29.332454] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:34.564 [2024-10-07 09:32:29.336030] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:11:34.564 [2024-10-07 09:32:29.336030] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:11:34.564 [2024-10-07 09:32:29.336034] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:11:34.564 [2024-10-07 09:32:29.336121] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:34.564 [2024-10-07 09:32:29.336121] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:34.564 [2024-10-07 09:32:29.336122] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:34.822 [2024-10-07 09:32:29.483470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.822 [2024-10-07 09:32:29.576523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:11:34.822 [2024-10-07 09:32:29.596087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.081 [2024-10-07 09:32:29.701875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:35.081 [2024-10-07 09:32:29.732491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.081 [2024-10-07 09:32:29.832784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.081 [2024-10-07 09:32:29.847788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.339 [2024-10-07 09:32:29.944315] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:35.596 Running I/O for 1 seconds... 00:11:35.596 Running I/O for 1 seconds... 00:11:35.596 Running I/O for 1 seconds... 00:11:35.855 Running I/O for 1 seconds...
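Stripped of the xtrace noise, the launch sequence recorded above is four concurrent bdevperf instances, one per workload, each pinned to its own core mask and reading its attach-controller config from an anonymous file descriptor (the /dev/fd/63 in the trace is bash process substitution). A condensed sketch; the paths and flags are the ones printed by this run, and gen_nvmf_target_json is the nvmf/common.sh helper whose output was just shown.

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# write / read / flush / unmap jobs on cores 4, 5, 6 and 7 respectively.
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync
# The script waits on each PID in turn; a single wait over all four is equivalent here.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"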
00:11:36.464 9972.00 IOPS, 38.95 MiB/s 00:11:36.464 Latency(us) 00:11:36.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.464 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:36.464 Nvme1n1 : 1.01 10019.28 39.14 0.00 0.00 12718.76 7087.60 21068.61 00:11:36.464 =================================================================================================================== 00:11:36.464 Total : 10019.28 39.14 0.00 0.00 12718.76 7087.60 21068.61 00:11:36.464 8252.00 IOPS, 32.23 MiB/s 00:11:36.464 Latency(us) 00:11:36.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.464 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:36.464 Nvme1n1 : 1.01 8312.01 32.47 0.00 0.00 15326.50 6941.96 25243.50 00:11:36.464 =================================================================================================================== 00:11:36.464 Total : 8312.01 32.47 0.00 0.00 15326.50 6941.96 25243.50 00:11:36.464 9302.00 IOPS, 36.34 MiB/s 00:11:36.464 Latency(us) 00:11:36.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.464 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:36.464 Nvme1n1 : 1.01 9376.07 36.63 0.00 0.00 13603.33 4684.61 24175.50 00:11:36.464 =================================================================================================================== 00:11:36.464 Total : 9376.07 36.63 0.00 0.00 13603.33 4684.61 24175.50 00:11:36.721 197920.00 IOPS, 773.12 MiB/s 00:11:36.721 Latency(us) 00:11:36.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.721 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:36.721 Nvme1n1 : 1.00 197547.72 771.67 0.00 0.00 644.46 309.48 1868.99 00:11:36.721 =================================================================================================================== 00:11:36.721 Total : 197547.72 771.67 0.00 0.00 644.46 309.48 1868.99 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1449570 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1449573 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1449575 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.978 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
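As a quick sanity check on the result tables above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size and divided by 2^20. For the read job, for example:

# 10019.28 IOPS * 4096 bytes per I/O / (1024*1024 bytes per MiB) ~= 39.14 MiB/s,
# which matches the read row reported above.
awk 'BEGIN { printf "%.2f MiB/s\n", 10019.28 * 4096 / (1024 * 1024) }'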
00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.235 rmmod nvme_tcp 00:11:37.235 rmmod nvme_fabrics 00:11:37.235 rmmod nvme_keyring 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1449535 ']' 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1449535 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1449535 ']' 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1449535 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1449535 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1449535' 00:11:37.235 killing process with pid 1449535 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1449535 00:11:37.235 09:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1449535 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.493 09:32:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.024 00:11:40.024 real 0m8.815s 00:11:40.024 user 0m20.497s 00:11:40.024 sys 0m4.486s 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.024 ************************************ 00:11:40.024 END TEST nvmf_bdev_io_wait 00:11:40.024 ************************************ 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.024 ************************************ 00:11:40.024 START TEST nvmf_queue_depth 00:11:40.024 ************************************ 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.024 * Looking for test storage... 00:11:40.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.024 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.025 --rc genhtml_branch_coverage=1 00:11:40.025 --rc genhtml_function_coverage=1 00:11:40.025 --rc genhtml_legend=1 00:11:40.025 --rc geninfo_all_blocks=1 00:11:40.025 --rc geninfo_unexecuted_blocks=1 00:11:40.025 00:11:40.025 ' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.025 --rc genhtml_branch_coverage=1 00:11:40.025 --rc genhtml_function_coverage=1 00:11:40.025 --rc genhtml_legend=1 00:11:40.025 --rc geninfo_all_blocks=1 00:11:40.025 --rc geninfo_unexecuted_blocks=1 00:11:40.025 00:11:40.025 ' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.025 --rc genhtml_branch_coverage=1 00:11:40.025 --rc genhtml_function_coverage=1 00:11:40.025 --rc genhtml_legend=1 00:11:40.025 --rc geninfo_all_blocks=1 00:11:40.025 --rc geninfo_unexecuted_blocks=1 00:11:40.025 00:11:40.025 ' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:40.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.025 --rc genhtml_branch_coverage=1 00:11:40.025 --rc genhtml_function_coverage=1 00:11:40.025 --rc genhtml_legend=1 00:11:40.025 --rc geninfo_all_blocks=1 00:11:40.025 --rc geninfo_unexecuted_blocks=1 00:11:40.025 00:11:40.025 ' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 09:32:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.025 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:40.025 09:32:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.026 09:32:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.556 09:32:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:42.556 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:42.556 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound 
]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.556 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:42.557 Found net devices under 0000:84:00.0: cvl_0_0 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:42.557 Found net devices under 0000:84:00.1: cvl_0_1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 
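The interface discovery traced above reduces to matching PCI vendor/device IDs against the e810/x722/mlx tables and collecting whatever net devices sysfs exposes under each matching function. A simplified stand-alone equivalent, restricted to the two Intel E810 IDs this run actually matched (the real gather_supported_nvmf_pci_devs covers more IDs plus the RDMA cases):

# Walk PCI functions, keep Intel (0x8086) parts with an E810 device ID, and record
# the kernel net interfaces found under them (cvl_0_0 / cvl_0_1 on this machine).
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] || continue
        echo "Found net devices under ${pci##*/}: ${netdir##*/}"
        net_devs+=("${netdir##*/}")
    done
done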
00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:42.557 00:11:42.557 --- 10.0.0.2 ping statistics --- 00:11:42.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.557 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:42.557 00:11:42.557 --- 10.0.0.1 ping statistics --- 00:11:42.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.557 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1451952 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1451952 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1451952 ']' 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
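Condensed from the nvmf_tcp_init and nvmfappstart traces above (the same plumbing ran earlier for the bdev_io_wait test): the target-side port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP/4420 is opened, reachability is ping-checked both ways, and nvmf_tgt is started inside the namespace. A sketch of the same steps; names, addresses and flags are taken from this log.

NS=cvl_0_0_ns_spdk
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
# Target interface goes into its own namespace; the initiator interface stays in the root one.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
# Start the target inside the namespace (-m 0x2 gives the single reactor on core 1 seen below).
ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!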
00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.557 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.557 [2024-10-07 09:32:37.352127] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:11:42.557 [2024-10-07 09:32:37.352254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.846 [2024-10-07 09:32:37.496315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.846 [2024-10-07 09:32:37.646478] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.846 [2024-10-07 09:32:37.646586] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.846 [2024-10-07 09:32:37.646622] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.846 [2024-10-07 09:32:37.646652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.846 [2024-10-07 09:32:37.646678] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.846 [2024-10-07 09:32:37.647704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 [2024-10-07 09:32:38.504105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 Malloc0 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.781 
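Once the target is listening on /var/tmp/spdk.sock, the configuration traced here and continued just below is a five-call RPC sequence: create the TCP transport, create a 64 MiB Malloc bdev, create the subsystem, attach the bdev as a namespace, and add the 10.0.0.2:4420 listener. A condensed sketch using the in-tree rpc.py client directly in place of the rpc_cmd wrapper; the script path is assumed, the arguments are the ones from the trace.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc client path
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420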
09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.781 [2024-10-07 09:32:38.576914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1452105 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1452105 /var/tmp/bdevperf.sock 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1452105 ']' 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.781 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.039 [2024-10-07 09:32:38.629964] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
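Stripped of the xtrace noise, the queue-depth setup above reduces to a short RPC sequence against the target socket followed by a bdevperf run at queue depth 1024; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. A condensed sketch of that sequence, assuming the default /var/tmp/spdk.sock target socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"   # what rpc_cmd resolves to in this trace

  # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem listening on 10.0.0.2:4420.
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf in RPC-wait mode, attach the remote namespace over TCP, run verify at qd=1024 for 10 s.
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (the harness waits for /var/tmp/bdevperf.sock to appear before issuing the next two commands)
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests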
00:11:44.039 [2024-10-07 09:32:38.630040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452105 ] 00:11:44.039 [2024-10-07 09:32:38.697286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.039 [2024-10-07 09:32:38.815564] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.298 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.298 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:44.298 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:44.298 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.298 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.298 NVMe0n1 00:11:44.298 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.298 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:44.556 Running I/O for 10 seconds... 00:11:54.853 7692.00 IOPS, 30.05 MiB/s 7826.50 IOPS, 30.57 MiB/s 8003.00 IOPS, 31.26 MiB/s 8087.25 IOPS, 31.59 MiB/s 8125.00 IOPS, 31.74 MiB/s 8167.00 IOPS, 31.90 MiB/s 8202.71 IOPS, 32.04 MiB/s 8244.88 IOPS, 32.21 MiB/s 8284.11 IOPS, 32.36 MiB/s 8303.40 IOPS, 32.44 MiB/s 00:11:54.853 Latency(us) 00:11:54.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.853 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:54.853 Verification LBA range: start 0x0 length 0x4000 00:11:54.853 NVMe0n1 : 10.07 8337.35 32.57 0.00 0.00 122260.50 10291.58 78449.02 00:11:54.853 =================================================================================================================== 00:11:54.853 Total : 8337.35 32.57 0.00 0.00 122260.50 10291.58 78449.02 00:11:54.853 { 00:11:54.853 "results": [ 00:11:54.853 { 00:11:54.853 "job": "NVMe0n1", 00:11:54.853 "core_mask": "0x1", 00:11:54.853 "workload": "verify", 00:11:54.853 "status": "finished", 00:11:54.853 "verify_range": { 00:11:54.853 "start": 0, 00:11:54.853 "length": 16384 00:11:54.853 }, 00:11:54.853 "queue_depth": 1024, 00:11:54.853 "io_size": 4096, 00:11:54.853 "runtime": 10.07155, 00:11:54.853 "iops": 8337.346287314267, 00:11:54.853 "mibps": 32.567758934821356, 00:11:54.853 "io_failed": 0, 00:11:54.853 "io_timeout": 0, 00:11:54.853 "avg_latency_us": 122260.50053699953, 00:11:54.853 "min_latency_us": 10291.579259259259, 00:11:54.853 "max_latency_us": 78449.01925925926 00:11:54.853 } 00:11:54.853 ], 00:11:54.853 "core_count": 1 00:11:54.853 } 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1452105 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1452105 ']' 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1452105 00:11:54.853 09:32:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1452105 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1452105' 00:11:54.853 killing process with pid 1452105 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1452105 00:11:54.853 Received shutdown signal, test time was about 10.000000 seconds 00:11:54.853 00:11:54.853 Latency(us) 00:11:54.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.853 =================================================================================================================== 00:11:54.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:54.853 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1452105 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.113 rmmod nvme_tcp 00:11:55.113 rmmod nvme_fabrics 00:11:55.113 rmmod nvme_keyring 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1451952 ']' 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1451952 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1451952 ']' 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1451952 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1451952 00:11:55.113 09:32:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1451952' 00:11:55.113 killing process with pid 1451952 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1451952 00:11:55.113 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1451952 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.680 09:32:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.580 00:11:57.580 real 0m17.936s 00:11:57.580 user 0m24.354s 00:11:57.580 sys 0m4.005s 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:57.580 ************************************ 00:11:57.580 END TEST nvmf_queue_depth 00:11:57.580 ************************************ 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:57.580 ************************************ 00:11:57.580 START TEST nvmf_target_multipath 00:11:57.580 ************************************ 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:57.580 * Looking for test storage... 
00:11:57.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:11:57.580 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:57.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.839 --rc genhtml_branch_coverage=1 00:11:57.839 --rc genhtml_function_coverage=1 00:11:57.839 --rc genhtml_legend=1 00:11:57.839 --rc geninfo_all_blocks=1 00:11:57.839 --rc geninfo_unexecuted_blocks=1 00:11:57.839 00:11:57.839 ' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:57.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.839 --rc genhtml_branch_coverage=1 00:11:57.839 --rc genhtml_function_coverage=1 00:11:57.839 --rc genhtml_legend=1 00:11:57.839 --rc geninfo_all_blocks=1 00:11:57.839 --rc geninfo_unexecuted_blocks=1 00:11:57.839 00:11:57.839 ' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:57.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.839 --rc genhtml_branch_coverage=1 00:11:57.839 --rc genhtml_function_coverage=1 00:11:57.839 --rc genhtml_legend=1 00:11:57.839 --rc geninfo_all_blocks=1 00:11:57.839 --rc geninfo_unexecuted_blocks=1 00:11:57.839 00:11:57.839 ' 00:11:57.839 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:57.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.840 --rc genhtml_branch_coverage=1 00:11:57.840 --rc genhtml_function_coverage=1 00:11:57.840 --rc genhtml_legend=1 00:11:57.840 --rc geninfo_all_blocks=1 00:11:57.840 --rc geninfo_unexecuted_blocks=1 00:11:57.840 00:11:57.840 ' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.840 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:00.371 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:00.371 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:00.371 Found net devices under 0000:84:00.0: cvl_0_0 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.371 09:32:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.371 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:00.372 Found net devices under 0000:84:00.1: cvl_0_1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.372 09:32:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:12:00.372 00:12:00.372 --- 10.0.0.2 ping statistics --- 00:12:00.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.372 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:00.372 00:12:00.372 --- 10.0.0.1 ping statistics --- 00:12:00.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.372 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:00.372 only one NIC for nvmf test 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
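The multipath run above rebuilds the same namespace-based topology as the earlier tests before exiting because only one NIC pair is configured: the first E810 port (cvl_0_0) is moved into a namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is opened, and reachability is ping-checked in both directions. A sketch of just that plumbing, assuming the cvl_0_0/cvl_0_1 names this rig reports for its two ports:

  NS=cvl_0_0_ns_spdk

  # Target interface goes into its own namespace; the initiator interface stays in the root namespace.
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic in on the initiator side; the SPDK_NVMF comment lets teardown strip the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1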
00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.372 rmmod nvme_tcp 00:12:00.372 rmmod nvme_fabrics 00:12:00.372 rmmod nvme_keyring 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.372 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.907 00:12:02.907 real 0m4.853s 00:12:02.907 user 0m0.885s 00:12:02.907 sys 0m1.957s 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:02.907 ************************************ 00:12:02.907 END TEST nvmf_target_multipath 00:12:02.907 ************************************ 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.907 ************************************ 00:12:02.907 START TEST nvmf_zcopy 00:12:02.907 ************************************ 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:02.907 * Looking for test storage... 
00:12:02.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.907 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.908 --rc genhtml_branch_coverage=1 00:12:02.908 --rc genhtml_function_coverage=1 00:12:02.908 --rc genhtml_legend=1 00:12:02.908 --rc geninfo_all_blocks=1 00:12:02.908 --rc geninfo_unexecuted_blocks=1 00:12:02.908 00:12:02.908 ' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.908 --rc genhtml_branch_coverage=1 00:12:02.908 --rc genhtml_function_coverage=1 00:12:02.908 --rc genhtml_legend=1 00:12:02.908 --rc geninfo_all_blocks=1 00:12:02.908 --rc geninfo_unexecuted_blocks=1 00:12:02.908 00:12:02.908 ' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.908 --rc genhtml_branch_coverage=1 00:12:02.908 --rc genhtml_function_coverage=1 00:12:02.908 --rc genhtml_legend=1 00:12:02.908 --rc geninfo_all_blocks=1 00:12:02.908 --rc geninfo_unexecuted_blocks=1 00:12:02.908 00:12:02.908 ' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.908 --rc genhtml_branch_coverage=1 00:12:02.908 --rc genhtml_function_coverage=1 00:12:02.908 --rc genhtml_legend=1 00:12:02.908 --rc geninfo_all_blocks=1 00:12:02.908 --rc geninfo_unexecuted_blocks=1 00:12:02.908 00:12:02.908 ' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.908 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.909 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.539 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:05.540 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:05.540 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:05.540 Found net devices under 0000:84:00.0: cvl_0_0 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:05.540 Found net devices under 0000:84:00.1: cvl_0_1 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.540 09:32:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:12:05.540 00:12:05.540 --- 10.0.0.2 ping statistics --- 00:12:05.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.540 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:05.540 00:12:05.540 --- 10.0.0.1 ping statistics --- 00:12:05.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.540 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1457496 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1457496 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1457496 ']' 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.540 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.541 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.541 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.541 [2024-10-07 09:33:00.189068] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:12:05.541 [2024-10-07 09:33:00.189164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.541 [2024-10-07 09:33:00.264010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.800 [2024-10-07 09:33:00.382086] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.800 [2024-10-07 09:33:00.382165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.800 [2024-10-07 09:33:00.382179] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.800 [2024-10-07 09:33:00.382190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.800 [2024-10-07 09:33:00.382199] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.800 [2024-10-07 09:33:00.382830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 [2024-10-07 09:33:00.581078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 [2024-10-07 09:33:00.597292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.800 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.059 malloc0 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:06.059 { 00:12:06.059 "params": { 00:12:06.059 "name": "Nvme$subsystem", 00:12:06.059 "trtype": "$TEST_TRANSPORT", 00:12:06.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.059 "adrfam": "ipv4", 00:12:06.059 "trsvcid": "$NVMF_PORT", 00:12:06.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.059 "hdgst": ${hdgst:-false}, 00:12:06.059 "ddgst": ${ddgst:-false} 00:12:06.059 }, 00:12:06.059 "method": "bdev_nvme_attach_controller" 00:12:06.059 } 00:12:06.059 EOF 00:12:06.059 )") 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:06.059 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:06.059 "params": { 00:12:06.059 "name": "Nvme1", 00:12:06.059 "trtype": "tcp", 00:12:06.059 "traddr": "10.0.0.2", 00:12:06.059 "adrfam": "ipv4", 00:12:06.059 "trsvcid": "4420", 00:12:06.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.059 "hdgst": false, 00:12:06.059 "ddgst": false 00:12:06.059 }, 00:12:06.059 "method": "bdev_nvme_attach_controller" 00:12:06.059 }' 00:12:06.059 [2024-10-07 09:33:00.711512] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:12:06.059 [2024-10-07 09:33:00.711594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457551 ] 00:12:06.059 [2024-10-07 09:33:00.775577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.318 [2024-10-07 09:33:00.900872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.318 Running I/O for 10 seconds... 00:12:16.538 5428.00 IOPS, 42.41 MiB/s 5483.00 IOPS, 42.84 MiB/s 5525.00 IOPS, 43.16 MiB/s 5534.25 IOPS, 43.24 MiB/s 5548.40 IOPS, 43.35 MiB/s 5546.17 IOPS, 43.33 MiB/s 5548.29 IOPS, 43.35 MiB/s 5554.50 IOPS, 43.39 MiB/s 5551.89 IOPS, 43.37 MiB/s 5553.20 IOPS, 43.38 MiB/s 00:12:16.538 Latency(us) 00:12:16.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:16.538 Verification LBA range: start 0x0 length 0x1000 00:12:16.538 Nvme1n1 : 10.02 5555.78 43.40 0.00 0.00 22976.62 3276.80 31845.64 00:12:16.538 =================================================================================================================== 00:12:16.538 Total : 5555.78 43.40 0.00 0.00 22976.62 3276.80 31845.64 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1459433 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:16.796 { 00:12:16.796 "params": { 00:12:16.796 "name": "Nvme$subsystem", 00:12:16.796 "trtype": "$TEST_TRANSPORT", 00:12:16.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.796 "adrfam": "ipv4", 00:12:16.796 "trsvcid": "$NVMF_PORT", 00:12:16.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.796 "hdgst": 
${hdgst:-false}, 00:12:16.796 "ddgst": ${ddgst:-false} 00:12:16.796 }, 00:12:16.796 "method": "bdev_nvme_attach_controller" 00:12:16.796 } 00:12:16.796 EOF 00:12:16.796 )") 00:12:16.796 [2024-10-07 09:33:11.463751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:16.796 [2024-10-07 09:33:11.463843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:16.796 09:33:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:16.796 "params": { 00:12:16.796 "name": "Nvme1", 00:12:16.796 "trtype": "tcp", 00:12:16.796 "traddr": "10.0.0.2", 00:12:16.796 "adrfam": "ipv4", 00:12:16.796 "trsvcid": "4420", 00:12:16.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.796 "hdgst": false, 00:12:16.796 "ddgst": false 00:12:16.796 }, 00:12:16.796 "method": "bdev_nvme_attach_controller" 00:12:16.796 }' 00:12:16.796 [2024-10-07 09:33:11.471675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.471739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.479660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.479703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.487746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.487803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.499794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.499852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.511835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.511908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.519862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.519944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.531918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.531976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.537215] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:12:16.796 [2024-10-07 09:33:11.537337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459433 ] 00:12:16.796 [2024-10-07 09:33:11.543952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.543977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.555971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.555996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.568024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.568049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.579997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.796 [2024-10-07 09:33:11.580022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.796 [2024-10-07 09:33:11.588013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.797 [2024-10-07 09:33:11.588038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.797 [2024-10-07 09:33:11.600005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.797 [2024-10-07 09:33:11.600027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.797 [2024-10-07 09:33:11.608011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.797 [2024-10-07 09:33:11.608032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.616036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.616058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.624069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.624090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.627535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.055 [2024-10-07 09:33:11.632102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.632128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.640143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.640193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.648134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.648158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.656155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.656192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:17.055 [2024-10-07 09:33:11.664188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.664210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.672207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.672229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.680226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.680264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.688266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.688291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.696302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.696335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.055 [2024-10-07 09:33:11.704340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.055 [2024-10-07 09:33:11.704379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.712334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.712359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.720356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.720380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.728379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.728401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.736399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.736424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.744421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.744445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.752443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.752467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.753040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.056 [2024-10-07 09:33:11.760463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.760487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.768500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.768530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 
09:33:11.776537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.776573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.784563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.784603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.792584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.792623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.800608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.800648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.808630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.808669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.816652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.816691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.824647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.824673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.832686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.832723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.840718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.840756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.848737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.848777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.856734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.856759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.056 [2024-10-07 09:33:11.864753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.056 [2024-10-07 09:33:11.864777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.872774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.872799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.880807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.880838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.888829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.888857] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.896854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.896882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.904878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.904915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.912907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.912948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.920946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.920988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.928961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.928983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.936978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.936999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.945001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.945027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 Running I/O for 5 seconds... 
00:12:17.315 [2024-10-07 09:33:11.953012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.953034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.967860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.967901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.979843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.979875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:11.992193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:11.992227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.003905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.003948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.015776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.015807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.027982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.028010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.040109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.040135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.052393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.052424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.064226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.064270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.076333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.076364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.088502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.088534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.100520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.100552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.112183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 [2024-10-07 09:33:12.112209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.315 [2024-10-07 09:33:12.124125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.315 
[2024-10-07 09:33:12.124152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.136093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.136138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.148233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.148280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.160287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.160319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.171865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.171908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.183878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.183919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.195829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.195861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.207633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.207674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.219355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.219383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.231043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.231070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.243347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.243380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.255029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.255057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.267032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.267060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.279036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.279063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.290765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.290804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.302502] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.302534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.316092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.316118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.327285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.327317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.338954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.338981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.350375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.350406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.361913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.361968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.373884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.373943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.574 [2024-10-07 09:33:12.386102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.574 [2024-10-07 09:33:12.386144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.398007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.398034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.409745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.409776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.421735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.421766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.433791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.433821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.445322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.445353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.457267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.457299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.468761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.468793] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.480791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.480823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.492261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.492292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.504150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.504195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.515992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.516019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.527792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.527823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.539573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.539605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.552059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.552086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.831 [2024-10-07 09:33:12.563857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.831 [2024-10-07 09:33:12.563888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.575962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.575989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.588006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.588033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.600187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.600219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.611632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.611663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.623403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.623435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.635145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.635189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.832 [2024-10-07 09:33:12.647072] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.832 [2024-10-07 09:33:12.647100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.659058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.659085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.670828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.670860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.682996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.683024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.694870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.694911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.706997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.707024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.721604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.721636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.732436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.732467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.744571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.744602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.757019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.757046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.768629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.768659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.781644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.781675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.793996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.794023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.806579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.806610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.818990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.819015] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.831007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.831033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.842917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.842961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.854466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.854497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.866139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.866165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.877642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.877672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.889832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.889863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.089 [2024-10-07 09:33:12.901755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.089 [2024-10-07 09:33:12.901786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.914088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.914115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.926049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.926077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.937946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.937972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.950038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.950064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 10573.00 IOPS, 82.60 MiB/s [2024-10-07 09:33:12.961877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.961919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.974005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.974031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.986072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.986099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:12.998208] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.348 [2024-10-07 09:33:12.998248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.348 [2024-10-07 09:33:13.010279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.010311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.021733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.021764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.033690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.033721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.045768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.045799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.057651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.057682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.069675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.069705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.081851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.081882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.093851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.093882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.105795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.105825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.117205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.117237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.129405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.129436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.141319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.141351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.349 [2024-10-07 09:33:13.154865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.349 [2024-10-07 09:33:13.154903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.166857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.166887] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.179132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.179159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.190734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.190765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.202028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.202054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.213665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.213696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.225735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.225767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.237742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.237773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.250022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.250048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.262227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.262281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.274295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.607 [2024-10-07 09:33:13.274328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.607 [2024-10-07 09:33:13.286456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.286490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.298457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.298490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.310846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.310878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.322767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.322798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.334121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.334149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.346215] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.346256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.357751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.357783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.371044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.371071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.381257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.381291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.393762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.393794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.405888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.405947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.608 [2024-10-07 09:33:13.418008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.608 [2024-10-07 09:33:13.418035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.429762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.429794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.441469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.441501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.453679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.453711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.465200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.465232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.476835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.476866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.488663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.488703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.501280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.501312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.513594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.513626] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.525257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.525284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.536440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.536471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.548522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.548554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.560832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.560864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.573362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.573395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.585408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.585439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.597139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.597180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.609001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.609028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.620615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.620646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.632567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.632598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.644432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.644462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.656212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.656256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.667931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.667976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.867 [2024-10-07 09:33:13.679241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.867 [2024-10-07 09:33:13.679273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.691195] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.691222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.702836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.702868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.714888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.714947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.726847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.726877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.738713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.738744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.750429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.750461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.763638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.763669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.774638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.774668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.785977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.786005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.797465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.797496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.808976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.809003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.820558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.820591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.832195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.832221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.844183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.844210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.856185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.856216] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.867721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.867752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.879096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.879122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.891192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.891218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.902869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.902911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.914384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.914415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.926014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.926041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.126 [2024-10-07 09:33:13.937653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.126 [2024-10-07 09:33:13.937693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:13.950115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:13.950141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 10661.00 IOPS, 83.29 MiB/s [2024-10-07 09:33:13.961827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:13.961858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:13.973565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:13.973597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:13.985688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:13.985719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:13.997768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:13.997799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.009589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.009620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.021309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.021340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.033074] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.033100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.047068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.047095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.058564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.058595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.070216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.070259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.082261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.082293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.093994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.094020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.105957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.105984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.118387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.118417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.130503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.130534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.142456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.142486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.154963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.154989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.166947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.166974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.178914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.178959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.384 [2024-10-07 09:33:14.190570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.384 [2024-10-07 09:33:14.190601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.202676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.202708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.214439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.214470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.226408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.226439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.238203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.238234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.249912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.249955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.262192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.262218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.274485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.274516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.286779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.643 [2024-10-07 09:33:14.286810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.643 [2024-10-07 09:33:14.298465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.298497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.310578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.310609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.322343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.322375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.334831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.334862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.346628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.346659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.358383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.358414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.370339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.370371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.381974] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.382001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.395313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.395345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.406163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.406202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.418529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.418562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.430454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.430486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.443057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.443085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.644 [2024-10-07 09:33:14.455446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.644 [2024-10-07 09:33:14.455478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.467583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.902 [2024-10-07 09:33:14.467614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.479630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.902 [2024-10-07 09:33:14.479662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.491462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.902 [2024-10-07 09:33:14.491494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.503021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.902 [2024-10-07 09:33:14.503049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.514938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.902 [2024-10-07 09:33:14.514965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.902 [2024-10-07 09:33:14.526799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.526831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.538286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.538317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.549849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.549880] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.561708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.561740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.573511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.573544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.585531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.585563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.597048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.597076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.608962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.609003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.620229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.620276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.631987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.632015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.645542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.645574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.656857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.656888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.668109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.668136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.679868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.679909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.691516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.691548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.703727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.703759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.903 [2024-10-07 09:33:14.716080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.903 [2024-10-07 09:33:14.716108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.161 [2024-10-07 09:33:14.728548] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.161 [2024-10-07 09:33:14.728580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.161 [2024-10-07 09:33:14.740475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.161 [2024-10-07 09:33:14.740506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.161 [2024-10-07 09:33:14.754107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.161 [2024-10-07 09:33:14.754134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.161 [2024-10-07 09:33:14.765470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.161 [2024-10-07 09:33:14.765501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.777512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.777543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.789746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.789778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.801866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.801907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.814093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.814120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.826289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.826317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.837624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.837664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.849650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.849681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.861348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.861380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.873588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.873619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.885754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.885786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.898147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.898188] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.910471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.910502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.922617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.922647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.934982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.935009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.947095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.947123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 [2024-10-07 09:33:14.959920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.959962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.162 10655.33 IOPS, 83.24 MiB/s [2024-10-07 09:33:14.972274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.162 [2024-10-07 09:33:14.972305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:14.983753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.472 [2024-10-07 09:33:14.983785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:14.996504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.472 [2024-10-07 09:33:14.996535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:15.008962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.472 [2024-10-07 09:33:15.008990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:15.020969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.472 [2024-10-07 09:33:15.020998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:15.032731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.472 [2024-10-07 09:33:15.032762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.472 [2024-10-07 09:33:15.046730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.046761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.057320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.057351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.069619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.069660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.082021] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.082048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.094072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.094099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.105803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.105829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.118819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.118845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.129309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.129336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.141651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.141684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.154141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.154182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.166666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.166698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.179013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.179041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.190747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.190778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.202314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.202345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.213948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.213975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.225635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.225667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.237956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.237984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.473 [2024-10-07 09:33:15.249748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.473 [2024-10-07 09:33:15.249778] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:20.473 [2024-10-07 09:33:15.261708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:20.473 [2024-10-07 09:33:15.261738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two errors repeat as a pair at roughly 10 ms intervals, 2024-10-07 09:33:15.273884 through 09:33:15.962590, while the test keeps re-adding NSID 1 ...]
00:12:21.251 10637.25 IOPS, 83.10 MiB/s [2024-10-07 09:33:15.976256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:21.251 [2024-10-07 09:33:15.976299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues in the same pattern from 2024-10-07 09:33:15.987027 through 09:33:16.957790 ...]
00:12:22.285 10647.80 IOPS, 83.19 MiB/s [2024-10-07 09:33:16.969211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:22.285 [2024-10-07 09:33:16.969253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:22.285 [2024-10-07 09:33:16.976686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:22.285 [2024-10-07 09:33:16.976716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:22.285 
00:12:22.286                                                                            Latency(us)
00:12:22.286 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:22.286 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:22.286      Nvme1n1             :       5.01   10649.10      83.20       0.00       0.00   12002.49    5461.33   21651.15
00:12:22.286 ===================================================================================================================
00:12:22.286      Total               :            10649.10      83.20       0.00       0.00   12002.49    5461.33   21651.15
00:12:22.286 [2024-10-07 09:33:16.984708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:22.286 [2024-10-07 09:33:16.984737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating during test shutdown, from 2024-10-07 09:33:16.992730 through 09:33:17.197308 ...]
00:12:22.544 [2024-10-07 09:33:17.205322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.205347]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.213337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.213361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.221405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.221448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.229435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.229483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.237443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.237484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.245429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.245453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.253452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.253477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 [2024-10-07 09:33:17.261489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.544 [2024-10-07 09:33:17.261544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1459433) - No such process 00:12:22.544 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1459433 00:12:22.544 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.544 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.545 delay0 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.545 09:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:22.803 [2024-10-07 09:33:17.410059] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:29.365 Initializing NVMe Controllers 00:12:29.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:29.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:29.365 Initialization complete. Launching workers. 00:12:29.365 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1094 00:12:29.365 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1381, failed to submit 33 00:12:29.365 success 1244, unsuccessful 137, failed 0 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.365 rmmod nvme_tcp 00:12:29.365 rmmod nvme_fabrics 00:12:29.365 rmmod nvme_keyring 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1457496 ']' 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1457496 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1457496 ']' 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1457496 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457496 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457496' 00:12:29.365 killing process with pid 1457496 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1457496 00:12:29.365 09:33:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@974 -- # wait 1457496 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.365 09:33:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.901 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.901 00:12:31.901 real 0m28.930s 00:12:31.901 user 0m41.343s 00:12:31.901 sys 0m9.451s 00:12:31.901 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.901 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.901 ************************************ 00:12:31.901 END TEST nvmf_zcopy 00:12:31.901 ************************************ 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.902 ************************************ 00:12:31.902 START TEST nvmf_nmic 00:12:31.902 ************************************ 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:31.902 * Looking for test storage... 
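For reference, the nvmf_zcopy tail traced above (rpc_cmd nvmf_subsystem_remove_ns, bdev_delay_create, nvmf_subsystem_add_ns, then the abort example) corresponds roughly to the stand-alone commands below. This is a minimal sketch, not the test script itself: it assumes an SPDK checkout as the working directory, the standard scripts/rpc.py client on the default RPC socket, and a target that is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev, which the harness sets up earlier in the run.

# Sketch of the traced sequence (arguments copied from the log; environment assumed as noted above)
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # drop the existing NSID 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                               # wrap malloc0 in a delay bdev (latency values as used by the test)
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 # re-expose the slow bdev as NSID 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'             # queue I/O against it and exercise abort handling

The delay bdev keeps commands in flight long enough for aborts to land; the "abort submitted 1381, failed to submit 33 ... success 1244, unsuccessful 137, failed 0" counters above are the result of that run.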
00:12:31.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:31.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.902 --rc genhtml_branch_coverage=1 00:12:31.902 --rc genhtml_function_coverage=1 00:12:31.902 --rc genhtml_legend=1 00:12:31.902 --rc geninfo_all_blocks=1 00:12:31.902 --rc geninfo_unexecuted_blocks=1 00:12:31.902 00:12:31.902 ' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:31.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.902 --rc genhtml_branch_coverage=1 00:12:31.902 --rc genhtml_function_coverage=1 00:12:31.902 --rc genhtml_legend=1 00:12:31.902 --rc geninfo_all_blocks=1 00:12:31.902 --rc geninfo_unexecuted_blocks=1 00:12:31.902 00:12:31.902 ' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:31.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.902 --rc genhtml_branch_coverage=1 00:12:31.902 --rc genhtml_function_coverage=1 00:12:31.902 --rc genhtml_legend=1 00:12:31.902 --rc geninfo_all_blocks=1 00:12:31.902 --rc geninfo_unexecuted_blocks=1 00:12:31.902 00:12:31.902 ' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:31.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.902 --rc genhtml_branch_coverage=1 00:12:31.902 --rc genhtml_function_coverage=1 00:12:31.902 --rc genhtml_legend=1 00:12:31.902 --rc geninfo_all_blocks=1 00:12:31.902 --rc geninfo_unexecuted_blocks=1 00:12:31.902 00:12:31.902 ' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
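The scripts/common.sh xtrace above (lt, cmp_versions, decimal) is only deciding whether the detected lcov 1.15 is older than version 2 so that the matching LCOV_OPTS get exported. Below is a compressed, self-contained sketch of that field-by-field comparison; it is an approximation of the traced helpers, not a verbatim copy of scripts/common.sh.

#!/usr/bin/env bash
# Approximate re-sketch of the version comparison stepped through in the trace above.
cmp_versions() {                                   # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:                                  # split fields on '.', '-' and ':' as the trace does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]] # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }               # helper name as seen in the trace
lt 1.15 2 && echo "lcov 1.15 predates 2: use the lcov_branch/function_coverage rc flags"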
00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.902 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:31.903 
09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.903 09:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:34.435 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:34.435 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:34.435 09:33:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:34.435 Found net devices under 0000:84:00.0: cvl_0_0 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:34.435 Found net devices under 0000:84:00.1: cvl_0_1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:12:34.435 00:12:34.435 --- 10.0.0.2 ping statistics --- 00:12:34.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.435 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:34.435 00:12:34.435 --- 10.0.0.1 ping statistics --- 00:12:34.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.435 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.435 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:34.436 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1462862 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1462862 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1462862 ']' 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.725 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 [2024-10-07 09:33:29.337985] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
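The nvmftestinit trace above hands the first E810 port (cvl_0_0) to a dedicated network namespace for the target, leaves the second port (cvl_0_1) on the host as the initiator, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420 in iptables, and ping-checks both directions before launching nvmf_tgt inside the namespace. A condensed standalone sketch of that topology, assuming two cabled interfaces with those names and an SPDK build under ./spdk (not the exact helper code from nvmf/common.sh):

  set -e
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0   # handed to the SPDK target inside the namespace
  INI_IF=cvl_0_1   # stays on the host for the initiator

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic in on the default port, then verify reachability both ways.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the target inside the namespace (binary path is an assumption).
  ip netns exec "$NS" ./spdk/build/bin/nvmf_tgt -m 0xF &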
00:12:34.725 [2024-10-07 09:33:29.338090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.725 [2024-10-07 09:33:29.421769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.984 [2024-10-07 09:33:29.543507] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.984 [2024-10-07 09:33:29.543552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.984 [2024-10-07 09:33:29.543580] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.984 [2024-10-07 09:33:29.543592] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.985 [2024-10-07 09:33:29.543603] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.985 [2024-10-07 09:33:29.545524] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.985 [2024-10-07 09:33:29.545607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.985 [2024-10-07 09:33:29.545679] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.985 [2024-10-07 09:33:29.545682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-10-07 09:33:29.713909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 Malloc0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-10-07 09:33:29.767306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:34.985 test case1: single bdev can't be used in multiple subsystems 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-10-07 09:33:29.791146] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:34.985 [2024-10-07 09:33:29.791191] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:34.985 [2024-10-07 09:33:29.791208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.985 request: 00:12:34.985 { 00:12:34.985 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:34.985 "namespace": { 00:12:34.985 "bdev_name": "Malloc0", 00:12:34.985 "no_auto_visible": false 
00:12:34.985 }, 00:12:34.985 "method": "nvmf_subsystem_add_ns", 00:12:34.985 "req_id": 1 00:12:34.985 } 00:12:34.985 Got JSON-RPC error response 00:12:34.985 response: 00:12:34.985 { 00:12:34.985 "code": -32602, 00:12:34.985 "message": "Invalid parameters" 00:12:34.985 } 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:34.985 Adding namespace failed - expected result. 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:34.985 test case2: host connect to nvmf target in multiple paths 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.243 [2024-10-07 09:33:29.803310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:35.243 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.243 09:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.809 09:33:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:36.376 09:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.376 09:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.376 09:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.376 09:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:36.376 09:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:38.901 09:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
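The two nmic cases above reduce to a short JSON-RPC sequence against the running target plus two host-side connects that reuse the NVME_HOST identity pair from common.sh. A condensed sketch using scripts/rpc.py against the default /var/tmp/spdk.sock; the second nvmf_subsystem_add_ns is expected to fail with -32602 because Malloc0 is already claimed by cnode1:

  # Transport, backing bdev, first subsystem with namespace and listener.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Case 1: the same bdev cannot be exposed by a second subsystem.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'

  # Case 2: one host reaches the same subsystem over two listeners.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 "${NVME_HOST[@]}"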
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:38.901 [global] 00:12:38.901 thread=1 00:12:38.901 invalidate=1 00:12:38.901 rw=write 00:12:38.901 time_based=1 00:12:38.901 runtime=1 00:12:38.901 ioengine=libaio 00:12:38.901 direct=1 00:12:38.901 bs=4096 00:12:38.901 iodepth=1 00:12:38.901 norandommap=0 00:12:38.901 numjobs=1 00:12:38.901 00:12:38.901 verify_dump=1 00:12:38.901 verify_backlog=512 00:12:38.901 verify_state_save=0 00:12:38.901 do_verify=1 00:12:38.901 verify=crc32c-intel 00:12:38.901 [job0] 00:12:38.901 filename=/dev/nvme0n1 00:12:38.901 Could not set queue depth (nvme0n1) 00:12:38.901 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.901 fio-3.35 00:12:38.901 Starting 1 thread 00:12:39.835 00:12:39.835 job0: (groupid=0, jobs=1): err= 0: pid=1463511: Mon Oct 7 09:33:34 2024 00:12:39.835 read: IOPS=1902, BW=7608KiB/s (7791kB/s)(7616KiB/1001msec) 00:12:39.835 slat (nsec): min=5010, max=80088, avg=15922.57, stdev=8689.74 00:12:39.835 clat (usec): min=203, max=655, avg=296.55, stdev=82.67 00:12:39.835 lat (usec): min=214, max=678, avg=312.47, stdev=86.59 00:12:39.835 clat percentiles (usec): 00:12:39.835 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 239], 00:12:39.835 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 289], 00:12:39.835 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 392], 95.00th=[ 482], 00:12:39.835 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 660], 00:12:39.835 | 99.99th=[ 660] 00:12:39.835 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:39.835 slat (nsec): min=6707, max=59973, avg=11657.57, stdev=4336.61 00:12:39.835 clat (usec): min=122, max=407, avg=178.21, stdev=30.92 00:12:39.835 lat (usec): min=130, max=467, avg=189.87, stdev=32.51 00:12:39.835 clat percentiles (usec): 00:12:39.835 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:12:39.835 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 178], 00:12:39.835 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 233], 00:12:39.835 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 351], 99.95th=[ 355], 00:12:39.835 | 99.99th=[ 408] 00:12:39.835 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:12:39.835 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:39.835 lat (usec) : 250=65.49%, 500=32.29%, 750=2.23% 00:12:39.835 cpu : usr=3.20%, sys=6.30%, ctx=3952, majf=0, minf=1 00:12:39.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.835 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.835 00:12:39.835 Run status group 0 (all jobs): 00:12:39.835 READ: bw=7608KiB/s (7791kB/s), 7608KiB/s-7608KiB/s (7791kB/s-7791kB/s), io=7616KiB (7799kB), run=1001-1001msec 00:12:39.835 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:12:39.835 00:12:39.835 Disk stats (read/write): 00:12:39.835 nvme0n1: ios=1618/2048, merge=0/0, ticks=490/364, in_queue=854, util=91.28% 00:12:39.835 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
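The fio-wrapper job above amounts to a one-second, queue-depth-1 sequential-write pass with crc32c verification against the namespace that enumerated as /dev/nvme0n1. Roughly the same run can be reproduced with fio directly; the device name is whatever the connect produced on a given system:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --thread=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512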
nqn.2016-06.io.spdk:cnode1 00:12:40.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.093 rmmod nvme_tcp 00:12:40.093 rmmod nvme_fabrics 00:12:40.093 rmmod nvme_keyring 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1462862 ']' 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1462862 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1462862 ']' 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1462862 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462862 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462862' 00:12:40.093 killing process with pid 1462862 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1462862 00:12:40.093 09:33:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 
1462862 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.660 09:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.564 00:12:42.564 real 0m11.011s 00:12:42.564 user 0m23.594s 00:12:42.564 sys 0m3.148s 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:42.564 ************************************ 00:12:42.564 END TEST nvmf_nmic 00:12:42.564 ************************************ 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:42.564 ************************************ 00:12:42.564 START TEST nvmf_fio_target 00:12:42.564 ************************************ 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:42.564 * Looking for test storage... 
00:12:42.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.564 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:42.822 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:42.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.823 --rc genhtml_branch_coverage=1 00:12:42.823 --rc genhtml_function_coverage=1 00:12:42.823 --rc genhtml_legend=1 00:12:42.823 --rc geninfo_all_blocks=1 00:12:42.823 --rc geninfo_unexecuted_blocks=1 00:12:42.823 00:12:42.823 ' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:42.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.823 --rc genhtml_branch_coverage=1 00:12:42.823 --rc genhtml_function_coverage=1 00:12:42.823 --rc genhtml_legend=1 00:12:42.823 --rc geninfo_all_blocks=1 00:12:42.823 --rc geninfo_unexecuted_blocks=1 00:12:42.823 00:12:42.823 ' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:42.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.823 --rc genhtml_branch_coverage=1 00:12:42.823 --rc genhtml_function_coverage=1 00:12:42.823 --rc genhtml_legend=1 00:12:42.823 --rc geninfo_all_blocks=1 00:12:42.823 --rc geninfo_unexecuted_blocks=1 00:12:42.823 00:12:42.823 ' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:42.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.823 --rc genhtml_branch_coverage=1 00:12:42.823 --rc genhtml_function_coverage=1 00:12:42.823 --rc genhtml_legend=1 00:12:42.823 --rc geninfo_all_blocks=1 00:12:42.823 --rc geninfo_unexecuted_blocks=1 00:12:42.823 00:12:42.823 ' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
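The lcov probe above runs through the lt/cmp_versions helpers in scripts/common.sh to decide whether the installed lcov (1.15 here) predates 2.x before choosing coverage flags. A much smaller sort -V idiom expresses the same dotted-version check (a simplification, not the actual SPDK helper):

  # Succeeds when $1 is strictly older than $2, e.g. version_lt 1.15 2.
  version_lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo "old lcov: keep the branch/function coverage rc options"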
uname -s 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.823 09:33:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.823 09:33:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.354 09:33:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.354 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:45.355 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:45.355 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.355 09:33:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:45.355 Found net devices under 0000:84:00.0: cvl_0_0 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:45.355 Found net devices under 0000:84:00.1: cvl_0_1 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.355 09:33:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.355 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:12:45.613 00:12:45.613 --- 10.0.0.2 ping statistics --- 00:12:45.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.613 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:12:45.613 00:12:45.613 --- 10.0.0.1 ping statistics --- 00:12:45.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.613 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1465740 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1465740 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1465740 ']' 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.613 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.614 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.614 [2024-10-07 09:33:40.380650] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
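(Note: the nvmf_tcp_init steps above build a two-port loopback topology: one physical port is moved into a network namespace and used as the target side, the other stays in the root namespace as the initiator, and both directions are ping-verified before the target app is started inside the namespace. Condensed sketch, with interface names and addresses copied from this run; the surrounding helper logic lives in nvmf/common.sh, not here:
ip netns add cvl_0_0_ns_spdk                          # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
The nvmf_tgt process launched afterwards is wrapped in "ip netns exec cvl_0_0_ns_spdk", so it only sees the target-side port.)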
00:12:45.614 [2024-10-07 09:33:40.380744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.871 [2024-10-07 09:33:40.458126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.871 [2024-10-07 09:33:40.582260] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.871 [2024-10-07 09:33:40.582318] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.871 [2024-10-07 09:33:40.582348] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.871 [2024-10-07 09:33:40.582360] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.871 [2024-10-07 09:33:40.582370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.871 [2024-10-07 09:33:40.584339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.871 [2024-10-07 09:33:40.584406] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.871 [2024-10-07 09:33:40.584426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.871 [2024-10-07 09:33:40.584430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.128 09:33:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:46.384 [2024-10-07 09:33:41.108967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.384 09:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:46.975 09:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:46.975 09:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:47.538 09:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:47.538 09:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.103 09:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:48.103 09:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.667 09:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:48.667 09:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:49.233 09:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.798 09:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:49.799 09:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.365 09:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:50.365 09:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.623 09:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:50.623 09:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:51.189 09:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.756 09:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:51.756 09:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.322 09:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:52.322 09:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.580 09:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.146 [2024-10-07 09:33:47.731075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.146 09:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:53.403 09:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:53.968 09:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.533 09:33:49 
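(Note: the rpc.py calls above assemble the fio target before any I/O runs: a TCP transport, seven 64 MiB malloc bdevs, a raid0 and a concat raid on top of four of them, and one subsystem exposing all of it on 10.0.0.2:4420. Condensed sketch in log order, with the rpc.py and nvme-cli paths shortened and the --hostnqn/--hostid arguments omitted:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                       # repeated for Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # then Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0     # then concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
After the connect, waitforserial polls lsblk until four namespaces with serial SPDKISFASTANDAWESOME appear, which is why the fio job files below use /dev/nvme0n1 through /dev/nvme0n4.)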
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:54.533 09:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.533 09:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.533 09:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:54.533 09:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:54.533 09:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:57.082 09:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:57.082 [global] 00:12:57.082 thread=1 00:12:57.082 invalidate=1 00:12:57.082 rw=write 00:12:57.082 time_based=1 00:12:57.082 runtime=1 00:12:57.082 ioengine=libaio 00:12:57.082 direct=1 00:12:57.082 bs=4096 00:12:57.082 iodepth=1 00:12:57.082 norandommap=0 00:12:57.082 numjobs=1 00:12:57.082 00:12:57.082 verify_dump=1 00:12:57.082 verify_backlog=512 00:12:57.082 verify_state_save=0 00:12:57.082 do_verify=1 00:12:57.082 verify=crc32c-intel 00:12:57.082 [job0] 00:12:57.082 filename=/dev/nvme0n1 00:12:57.082 [job1] 00:12:57.082 filename=/dev/nvme0n2 00:12:57.082 [job2] 00:12:57.082 filename=/dev/nvme0n3 00:12:57.082 [job3] 00:12:57.082 filename=/dev/nvme0n4 00:12:57.082 Could not set queue depth (nvme0n1) 00:12:57.082 Could not set queue depth (nvme0n2) 00:12:57.082 Could not set queue depth (nvme0n3) 00:12:57.082 Could not set queue depth (nvme0n4) 00:12:57.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.082 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.082 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.082 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.082 fio-3.35 00:12:57.082 Starting 4 threads 00:12:58.457 00:12:58.457 job0: (groupid=0, jobs=1): err= 0: pid=1467099: Mon Oct 7 09:33:52 2024 00:12:58.457 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:12:58.457 slat (nsec): min=10097, max=18890, avg=16890.95, stdev=1874.21 00:12:58.457 clat (usec): min=40905, max=41907, avg=41031.88, stdev=202.32 00:12:58.457 lat (usec): min=40921, max=41923, avg=41048.78, stdev=202.10 00:12:58.457 clat percentiles (usec): 00:12:58.457 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:12:58.457 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:58.457 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:58.457 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:12:58.457 | 99.99th=[41681] 00:12:58.457 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:12:58.457 slat (nsec): min=8684, max=53312, avg=13088.92, stdev=5784.06 00:12:58.457 clat (usec): min=148, max=405, avg=219.23, stdev=40.15 00:12:58.457 lat (usec): min=161, max=417, avg=232.32, stdev=40.36 00:12:58.457 clat percentiles (usec): 00:12:58.457 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 184], 00:12:58.457 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 227], 00:12:58.457 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 297], 00:12:58.457 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 408], 99.95th=[ 408], 00:12:58.457 | 99.99th=[ 408] 00:12:58.457 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 00:12:58.457 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:58.457 lat (usec) : 250=81.46%, 500=14.42% 00:12:58.457 lat (msec) : 50=4.12% 00:12:58.457 cpu : usr=0.29%, sys=0.98%, ctx=534, majf=0, minf=1 00:12:58.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.457 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.457 job1: (groupid=0, jobs=1): err= 0: pid=1467100: Mon Oct 7 09:33:52 2024 00:12:58.457 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 00:12:58.457 slat (nsec): min=9273, max=29333, avg=18879.24, stdev=3445.16 00:12:58.457 clat (usec): min=40667, max=41095, avg=40959.07, stdev=86.22 00:12:58.457 lat (usec): min=40676, max=41113, avg=40977.95, stdev=87.88 00:12:58.457 clat percentiles (usec): 00:12:58.457 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:58.457 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:58.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:58.458 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:58.458 | 99.99th=[41157] 00:12:58.458 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:12:58.458 slat (usec): min=7, max=40787, avg=122.38, stdev=1935.15 00:12:58.458 clat (usec): min=142, max=439, avg=198.77, stdev=38.06 00:12:58.458 lat (usec): min=152, max=40991, avg=321.15, stdev=1935.87 00:12:58.458 clat percentiles (usec): 00:12:58.458 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:12:58.458 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 198], 60.00th=[ 208], 00:12:58.458 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 255], 00:12:58.458 | 99.00th=[ 314], 99.50th=[ 424], 99.90th=[ 441], 99.95th=[ 441], 00:12:58.458 | 99.99th=[ 441] 00:12:58.458 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 00:12:58.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:58.458 lat (usec) : 250=89.87%, 500=6.19% 00:12:58.458 lat (msec) : 50=3.94% 00:12:58.458 cpu : usr=0.10%, sys=0.68%, ctx=537, majf=0, minf=1 00:12:58.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:12:58.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.458 job2: (groupid=0, jobs=1): err= 0: pid=1467105: Mon Oct 7 09:33:52 2024 00:12:58.458 read: IOPS=1043, BW=4174KiB/s (4274kB/s)(4324KiB/1036msec) 00:12:58.458 slat (nsec): min=6471, max=52174, avg=13087.16, stdev=6851.99 00:12:58.458 clat (usec): min=199, max=41202, avg=651.72, stdev=3898.68 00:12:58.458 lat (usec): min=207, max=41214, avg=664.81, stdev=3900.18 00:12:58.458 clat percentiles (usec): 00:12:58.458 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:12:58.458 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 281], 00:12:58.458 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 367], 95.00th=[ 400], 00:12:58.458 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:58.458 | 99.99th=[41157] 00:12:58.458 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:12:58.458 slat (nsec): min=8668, max=62160, avg=12246.15, stdev=5328.23 00:12:58.458 clat (usec): min=137, max=384, avg=188.13, stdev=39.04 00:12:58.458 lat (usec): min=146, max=424, avg=200.37, stdev=40.92 00:12:58.458 clat percentiles (usec): 00:12:58.458 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:12:58.458 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 188], 00:12:58.458 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 265], 00:12:58.458 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 375], 99.95th=[ 383], 00:12:58.458 | 99.99th=[ 383] 00:12:58.458 bw ( KiB/s): min= 4096, max= 8192, per=51.80%, avg=6144.00, stdev=2896.31, samples=2 00:12:58.458 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:12:58.458 lat (usec) : 250=72.95%, 500=26.60%, 750=0.08% 00:12:58.458 lat (msec) : 50=0.38% 00:12:58.458 cpu : usr=2.22%, sys=2.90%, ctx=2617, majf=0, minf=1 00:12:58.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 issued rwts: total=1081,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.458 job3: (groupid=0, jobs=1): err= 0: pid=1467111: Mon Oct 7 09:33:52 2024 00:12:58.458 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:12:58.458 slat (nsec): min=11104, max=28013, avg=17914.86, stdev=3658.86 00:12:58.458 clat (usec): min=289, max=41961, avg=37456.96, stdev=12018.04 00:12:58.458 lat (usec): min=309, max=41981, avg=37474.88, stdev=12018.56 00:12:58.458 clat percentiles (usec): 00:12:58.458 | 1.00th=[ 289], 5.00th=[ 400], 10.00th=[40633], 20.00th=[41157], 00:12:58.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:58.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:12:58.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:58.458 | 99.99th=[42206] 00:12:58.458 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:58.458 slat (usec): min=11, max=40720, avg=123.88, stdev=1915.61 00:12:58.458 clat (usec): min=158, max=336, avg=213.93, stdev=26.51 00:12:58.458 lat (usec): min=173, 
max=40998, avg=337.81, stdev=1920.32 00:12:58.458 clat percentiles (usec): 00:12:58.458 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 194], 00:12:58.458 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:12:58.458 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 265], 00:12:58.458 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 338], 00:12:58.458 | 99.99th=[ 338] 00:12:58.458 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 00:12:58.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:58.458 lat (usec) : 250=87.27%, 500=8.99% 00:12:58.458 lat (msec) : 50=3.75% 00:12:58.458 cpu : usr=0.70%, sys=0.80%, ctx=538, majf=0, minf=1 00:12:58.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.458 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.458 00:12:58.458 Run status group 0 (all jobs): 00:12:58.458 READ: bw=4425KiB/s (4531kB/s), 81.8KiB/s-4174KiB/s (83.8kB/s-4274kB/s), io=4584KiB (4694kB), run=1001-1036msec 00:12:58.458 WRITE: bw=11.6MiB/s (12.1MB/s), 1994KiB/s-5931KiB/s (2042kB/s-6073kB/s), io=12.0MiB (12.6MB), run=1001-1036msec 00:12:58.458 00:12:58.458 Disk stats (read/write): 00:12:58.458 nvme0n1: ios=67/512, merge=0/0, ticks=727/115, in_queue=842, util=86.07% 00:12:58.458 nvme0n2: ios=42/512, merge=0/0, ticks=1602/99, in_queue=1701, util=95.32% 00:12:58.458 nvme0n3: ios=1133/1536, merge=0/0, ticks=569/288, in_queue=857, util=94.53% 00:12:58.458 nvme0n4: ios=45/512, merge=0/0, ticks=1564/105, in_queue=1669, util=96.61% 00:12:58.458 09:33:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:58.458 [global] 00:12:58.458 thread=1 00:12:58.458 invalidate=1 00:12:58.458 rw=randwrite 00:12:58.458 time_based=1 00:12:58.458 runtime=1 00:12:58.458 ioengine=libaio 00:12:58.458 direct=1 00:12:58.458 bs=4096 00:12:58.458 iodepth=1 00:12:58.458 norandommap=0 00:12:58.458 numjobs=1 00:12:58.458 00:12:58.458 verify_dump=1 00:12:58.459 verify_backlog=512 00:12:58.459 verify_state_save=0 00:12:58.459 do_verify=1 00:12:58.459 verify=crc32c-intel 00:12:58.459 [job0] 00:12:58.459 filename=/dev/nvme0n1 00:12:58.459 [job1] 00:12:58.459 filename=/dev/nvme0n2 00:12:58.459 [job2] 00:12:58.459 filename=/dev/nvme0n3 00:12:58.459 [job3] 00:12:58.459 filename=/dev/nvme0n4 00:12:58.459 Could not set queue depth (nvme0n1) 00:12:58.459 Could not set queue depth (nvme0n2) 00:12:58.459 Could not set queue depth (nvme0n3) 00:12:58.459 Could not set queue depth (nvme0n4) 00:12:58.459 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.459 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.459 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.459 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.459 fio-3.35 00:12:58.459 Starting 4 threads 00:12:59.833 00:12:59.833 job0: (groupid=0, jobs=1): err= 0: pid=1467444: Mon Oct 7 09:33:54 
2024 00:12:59.833 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:12:59.833 slat (nsec): min=10813, max=21633, avg=15533.78, stdev=2797.16 00:12:59.833 clat (usec): min=238, max=41993, avg=39256.39, stdev=8510.54 00:12:59.833 lat (usec): min=251, max=42005, avg=39271.93, stdev=8511.14 00:12:59.833 clat percentiles (usec): 00:12:59.833 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:59.833 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:59.834 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:12:59.834 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.834 | 99.99th=[42206] 00:12:59.834 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:12:59.834 slat (nsec): min=10440, max=60456, avg=13610.26, stdev=3098.46 00:12:59.834 clat (usec): min=151, max=333, avg=186.15, stdev=17.82 00:12:59.834 lat (usec): min=163, max=393, avg=199.76, stdev=18.81 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:12:59.834 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:12:59.834 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:12:59.834 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 334], 99.95th=[ 334], 00:12:59.834 | 99.99th=[ 334] 00:12:59.834 bw ( KiB/s): min= 4096, max= 4096, per=18.33%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.834 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.834 lat (usec) : 250=95.70%, 500=0.19% 00:12:59.834 lat (msec) : 50=4.11% 00:12:59.834 cpu : usr=0.40%, sys=0.99%, ctx=535, majf=0, minf=1 00:12:59.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.834 job1: (groupid=0, jobs=1): err= 0: pid=1467445: Mon Oct 7 09:33:54 2024 00:12:59.834 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:12:59.834 slat (nsec): min=7810, max=55229, avg=10908.97, stdev=4885.29 00:12:59.834 clat (usec): min=184, max=41279, avg=470.19, stdev=3046.81 00:12:59.834 lat (usec): min=194, max=41290, avg=481.10, stdev=3047.33 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:12:59.834 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:12:59.834 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:12:59.834 | 99.00th=[ 457], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:12:59.834 | 99.99th=[41157] 00:12:59.834 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:59.834 slat (nsec): min=9237, max=53752, avg=13049.10, stdev=4781.15 00:12:59.834 clat (usec): min=120, max=1115, avg=189.82, stdev=49.46 00:12:59.834 lat (usec): min=147, max=1127, avg=202.87, stdev=49.44 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:12:59.834 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 192], 00:12:59.834 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 237], 95.00th=[ 253], 00:12:59.834 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 816], 99.95th=[ 1123], 00:12:59.834 | 99.99th=[ 1123] 
00:12:59.834 bw ( KiB/s): min= 4096, max= 4096, per=18.33%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.834 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.834 lat (usec) : 250=85.53%, 500=13.99%, 750=0.10%, 1000=0.03% 00:12:59.834 lat (msec) : 2=0.03%, 4=0.03%, 50=0.27% 00:12:59.834 cpu : usr=2.20%, sys=4.70%, ctx=2944, majf=0, minf=1 00:12:59.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.834 job2: (groupid=0, jobs=1): err= 0: pid=1467446: Mon Oct 7 09:33:54 2024 00:12:59.834 read: IOPS=1186, BW=4747KiB/s (4861kB/s)(4752KiB/1001msec) 00:12:59.834 slat (nsec): min=6300, max=41148, avg=10765.10, stdev=4049.99 00:12:59.834 clat (usec): min=200, max=41027, avg=539.15, stdev=3330.02 00:12:59.834 lat (usec): min=209, max=41045, avg=549.92, stdev=3330.65 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:12:59.834 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:12:59.834 | 70.00th=[ 262], 80.00th=[ 293], 90.00th=[ 347], 95.00th=[ 449], 00:12:59.834 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:59.834 | 99.99th=[41157] 00:12:59.834 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:59.834 slat (nsec): min=8658, max=69211, avg=13627.40, stdev=4770.21 00:12:59.834 clat (usec): min=146, max=425, avg=206.00, stdev=43.52 00:12:59.834 lat (usec): min=156, max=447, avg=219.63, stdev=44.72 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 167], 00:12:59.834 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 208], 00:12:59.834 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 281], 95.00th=[ 293], 00:12:59.834 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 424], 99.95th=[ 424], 00:12:59.834 | 99.99th=[ 424] 00:12:59.834 bw ( KiB/s): min= 8192, max= 8192, per=36.65%, avg=8192.00, stdev= 0.00, samples=1 00:12:59.834 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:59.834 lat (usec) : 250=73.83%, 500=25.73%, 750=0.15% 00:12:59.834 lat (msec) : 50=0.29% 00:12:59.834 cpu : usr=1.90%, sys=4.40%, ctx=2725, majf=0, minf=1 00:12:59.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 issued rwts: total=1188,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.834 job3: (groupid=0, jobs=1): err= 0: pid=1467447: Mon Oct 7 09:33:54 2024 00:12:59.834 read: IOPS=1986, BW=7944KiB/s (8135kB/s)(7952KiB/1001msec) 00:12:59.834 slat (nsec): min=7749, max=36284, avg=9530.04, stdev=1996.36 00:12:59.834 clat (usec): min=192, max=552, avg=268.03, stdev=59.00 00:12:59.834 lat (usec): min=201, max=570, avg=277.56, stdev=59.66 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:12:59.834 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 00:12:59.834 | 70.00th=[ 269], 80.00th=[ 
310], 90.00th=[ 355], 95.00th=[ 404], 00:12:59.834 | 99.00th=[ 457], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 553], 00:12:59.834 | 99.99th=[ 553] 00:12:59.834 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:59.834 slat (nsec): min=9696, max=48220, avg=12143.99, stdev=2520.17 00:12:59.834 clat (usec): min=146, max=1644, avg=200.21, stdev=49.17 00:12:59.834 lat (usec): min=156, max=1656, avg=212.35, stdev=49.51 00:12:59.834 clat percentiles (usec): 00:12:59.834 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:12:59.834 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 202], 00:12:59.834 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 260], 00:12:59.834 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 701], 99.95th=[ 783], 00:12:59.834 | 99.99th=[ 1647] 00:12:59.834 bw ( KiB/s): min= 8904, max= 8904, per=39.84%, avg=8904.00, stdev= 0.00, samples=1 00:12:59.834 iops : min= 2226, max= 2226, avg=2226.00, stdev= 0.00, samples=1 00:12:59.834 lat (usec) : 250=73.59%, 500=26.02%, 750=0.35%, 1000=0.02% 00:12:59.834 lat (msec) : 2=0.02% 00:12:59.834 cpu : usr=2.90%, sys=6.40%, ctx=4036, majf=0, minf=2 00:12:59.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.834 issued rwts: total=1988,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.834 00:12:59.834 Run status group 0 (all jobs): 00:12:59.834 READ: bw=17.9MiB/s (18.7MB/s), 91.3KiB/s-7944KiB/s (93.5kB/s-8135kB/s), io=18.0MiB (18.9MB), run=1001-1008msec 00:12:59.834 WRITE: bw=21.8MiB/s (22.9MB/s), 2032KiB/s-8184KiB/s (2081kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1008msec 00:12:59.834 00:12:59.834 Disk stats (read/write): 00:12:59.834 nvme0n1: ios=69/512, merge=0/0, ticks=765/95, in_queue=860, util=86.77% 00:12:59.834 nvme0n2: ios=1074/1225, merge=0/0, ticks=627/236, in_queue=863, util=90.85% 00:12:59.834 nvme0n3: ios=1061/1145, merge=0/0, ticks=985/222, in_queue=1207, util=99.69% 00:12:59.834 nvme0n4: ios=1636/2048, merge=0/0, ticks=467/390, in_queue=857, util=95.79% 00:12:59.834 09:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:59.834 [global] 00:12:59.834 thread=1 00:12:59.834 invalidate=1 00:12:59.834 rw=write 00:12:59.834 time_based=1 00:12:59.834 runtime=1 00:12:59.834 ioengine=libaio 00:12:59.834 direct=1 00:12:59.834 bs=4096 00:12:59.834 iodepth=128 00:12:59.834 norandommap=0 00:12:59.834 numjobs=1 00:12:59.834 00:12:59.834 verify_dump=1 00:12:59.834 verify_backlog=512 00:12:59.834 verify_state_save=0 00:12:59.834 do_verify=1 00:12:59.834 verify=crc32c-intel 00:12:59.834 [job0] 00:12:59.834 filename=/dev/nvme0n1 00:12:59.834 [job1] 00:12:59.834 filename=/dev/nvme0n2 00:12:59.834 [job2] 00:12:59.834 filename=/dev/nvme0n3 00:12:59.834 [job3] 00:12:59.834 filename=/dev/nvme0n4 00:12:59.834 Could not set queue depth (nvme0n1) 00:12:59.834 Could not set queue depth (nvme0n2) 00:12:59.834 Could not set queue depth (nvme0n3) 00:12:59.834 Could not set queue depth (nvme0n4) 00:12:59.834 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.834 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.834 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.834 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.834 fio-3.35 00:12:59.834 Starting 4 threads 00:13:01.209 00:13:01.209 job0: (groupid=0, jobs=1): err= 0: pid=1467679: Mon Oct 7 09:33:55 2024 00:13:01.209 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:13:01.209 slat (usec): min=4, max=10036, avg=87.96, stdev=498.49 00:13:01.209 clat (usec): min=1106, max=46380, avg=11801.29, stdev=4970.73 00:13:01.209 lat (usec): min=1114, max=50872, avg=11889.25, stdev=4994.81 00:13:01.209 clat percentiles (usec): 00:13:01.209 | 1.00th=[ 3392], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[10028], 00:13:01.209 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:13:01.209 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13566], 95.00th=[17171], 00:13:01.209 | 99.00th=[39060], 99.50th=[40633], 99.90th=[46400], 99.95th=[46400], 00:13:01.209 | 99.99th=[46400] 00:13:01.209 write: IOPS=5444, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1002msec); 0 zone resets 00:13:01.209 slat (usec): min=5, max=8180, avg=89.71, stdev=413.49 00:13:01.209 clat (usec): min=542, max=45615, avg=12185.86, stdev=5302.10 00:13:01.209 lat (usec): min=3501, max=45624, avg=12275.58, stdev=5344.40 00:13:01.209 clat percentiles (usec): 00:13:01.209 | 1.00th=[ 4359], 5.00th=[ 7111], 10.00th=[ 8979], 20.00th=[10159], 00:13:01.209 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:13:01.209 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15401], 95.00th=[23725], 00:13:01.209 | 99.00th=[37487], 99.50th=[40633], 99.90th=[44303], 99.95th=[44303], 00:13:01.209 | 99.99th=[45876] 00:13:01.209 bw ( KiB/s): min=20480, max=22144, per=31.15%, avg=21312.00, stdev=1176.63, samples=2 00:13:01.210 iops : min= 5120, max= 5536, avg=5328.00, stdev=294.16, samples=2 00:13:01.210 lat (usec) : 750=0.01% 00:13:01.210 lat (msec) : 2=0.17%, 4=0.90%, 10=17.65%, 20=75.62%, 50=5.65% 00:13:01.210 cpu : usr=6.39%, sys=9.69%, ctx=598, majf=0, minf=1 00:13:01.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:01.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.210 issued rwts: total=5120,5455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.210 job1: (groupid=0, jobs=1): err= 0: pid=1467680: Mon Oct 7 09:33:55 2024 00:13:01.210 read: IOPS=4025, BW=15.7MiB/s (16.5MB/s)(15.9MiB/1008msec) 00:13:01.210 slat (usec): min=2, max=11549, avg=108.66, stdev=709.91 00:13:01.210 clat (usec): min=4007, max=37334, avg=13446.14, stdev=4570.71 00:13:01.210 lat (usec): min=4019, max=37342, avg=13554.80, stdev=4624.52 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 5735], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11207], 00:13:01.210 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12649], 00:13:01.210 | 70.00th=[13566], 80.00th=[15664], 90.00th=[17695], 95.00th=[22938], 00:13:01.210 | 99.00th=[32113], 99.50th=[33162], 99.90th=[37487], 99.95th=[37487], 00:13:01.210 | 99.99th=[37487] 00:13:01.210 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:13:01.210 slat (usec): min=4, max=15995, avg=126.15, stdev=764.05 00:13:01.210 clat (usec): min=1220, 
max=45147, avg=17878.75, stdev=9345.17 00:13:01.210 lat (usec): min=1226, max=45157, avg=18004.90, stdev=9423.85 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 3752], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[ 9896], 00:13:01.210 | 30.00th=[11600], 40.00th=[12387], 50.00th=[14222], 60.00th=[19530], 00:13:01.210 | 70.00th=[22152], 80.00th=[26608], 90.00th=[31589], 95.00th=[34866], 00:13:01.210 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:13:01.210 | 99.99th=[45351] 00:13:01.210 bw ( KiB/s): min=12288, max=20480, per=23.94%, avg=16384.00, stdev=5792.62, samples=2 00:13:01.210 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:13:01.210 lat (msec) : 2=0.17%, 4=0.37%, 10=15.42%, 20=62.08%, 50=21.96% 00:13:01.210 cpu : usr=3.97%, sys=4.87%, ctx=339, majf=0, minf=2 00:13:01.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:01.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.210 issued rwts: total=4058,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.210 job2: (groupid=0, jobs=1): err= 0: pid=1467681: Mon Oct 7 09:33:55 2024 00:13:01.210 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:13:01.210 slat (usec): min=3, max=16550, avg=150.18, stdev=926.39 00:13:01.210 clat (usec): min=3129, max=59257, avg=19205.33, stdev=7994.12 00:13:01.210 lat (usec): min=3269, max=59276, avg=19355.51, stdev=8058.05 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 9110], 5.00th=[12387], 10.00th=[13566], 20.00th=[14353], 00:13:01.210 | 30.00th=[14746], 40.00th=[16909], 50.00th=[17957], 60.00th=[18220], 00:13:01.210 | 70.00th=[19792], 80.00th=[21627], 90.00th=[26870], 95.00th=[34866], 00:13:01.210 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[58983], 00:13:01.210 | 99.99th=[59507] 00:13:01.210 write: IOPS=3216, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1008msec); 0 zone resets 00:13:01.210 slat (usec): min=4, max=24357, avg=158.83, stdev=948.99 00:13:01.210 clat (usec): min=2611, max=65212, avg=21185.83, stdev=11829.85 00:13:01.210 lat (usec): min=2620, max=65238, avg=21344.66, stdev=11893.73 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 9372], 5.00th=[11076], 10.00th=[12649], 20.00th=[13960], 00:13:01.210 | 30.00th=[14353], 40.00th=[15270], 50.00th=[15926], 60.00th=[17171], 00:13:01.210 | 70.00th=[22152], 80.00th=[27132], 90.00th=[39060], 95.00th=[49021], 00:13:01.210 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:13:01.210 | 99.99th=[65274] 00:13:01.210 bw ( KiB/s): min= 8528, max=16384, per=18.20%, avg=12456.00, stdev=5555.03, samples=2 00:13:01.210 iops : min= 2132, max= 4096, avg=3114.00, stdev=1388.76, samples=2 00:13:01.210 lat (msec) : 4=0.27%, 10=1.27%, 20=67.77%, 50=27.27%, 100=3.42% 00:13:01.210 cpu : usr=2.28%, sys=4.77%, ctx=369, majf=0, minf=1 00:13:01.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:01.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.210 issued rwts: total=3072,3242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.210 job3: (groupid=0, jobs=1): err= 0: pid=1467682: Mon Oct 7 09:33:55 2024 00:13:01.210 read: 
IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:13:01.210 slat (usec): min=3, max=26042, avg=133.51, stdev=1026.64 00:13:01.210 clat (usec): min=4852, max=90446, avg=16289.99, stdev=9512.41 00:13:01.210 lat (usec): min=4862, max=90466, avg=16423.49, stdev=9602.26 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11863], 00:13:01.210 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13829], 60.00th=[14877], 00:13:01.210 | 70.00th=[16188], 80.00th=[19006], 90.00th=[21890], 95.00th=[26870], 00:13:01.210 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[88605], 00:13:01.210 | 99.99th=[90702] 00:13:01.210 write: IOPS=4453, BW=17.4MiB/s (18.2MB/s)(17.6MiB/1011msec); 0 zone resets 00:13:01.210 slat (usec): min=5, max=21264, avg=92.96, stdev=547.90 00:13:01.210 clat (usec): min=1594, max=74769, avg=13362.48, stdev=5920.68 00:13:01.210 lat (usec): min=1606, max=74794, avg=13455.43, stdev=5937.53 00:13:01.210 clat percentiles (usec): 00:13:01.210 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 8225], 20.00th=[10945], 00:13:01.210 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:13:01.210 | 70.00th=[13173], 80.00th=[13698], 90.00th=[17433], 95.00th=[22938], 00:13:01.210 | 99.00th=[35914], 99.50th=[49021], 99.90th=[49021], 99.95th=[71828], 00:13:01.210 | 99.99th=[74974] 00:13:01.210 bw ( KiB/s): min=15344, max=19656, per=25.57%, avg=17500.00, stdev=3049.04, samples=2 00:13:01.210 iops : min= 3836, max= 4914, avg=4375.00, stdev=762.26, samples=2 00:13:01.210 lat (msec) : 2=0.07%, 4=0.23%, 10=10.97%, 20=76.65%, 50=10.60% 00:13:01.210 lat (msec) : 100=1.49% 00:13:01.210 cpu : usr=5.35%, sys=5.74%, ctx=483, majf=0, minf=1 00:13:01.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:01.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.210 issued rwts: total=4096,4502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.210 00:13:01.210 Run status group 0 (all jobs): 00:13:01.210 READ: bw=63.2MiB/s (66.2MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=63.9MiB (67.0MB), run=1002-1011msec 00:13:01.210 WRITE: bw=66.8MiB/s (70.1MB/s), 12.6MiB/s-21.3MiB/s (13.2MB/s-22.3MB/s), io=67.6MiB (70.8MB), run=1002-1011msec 00:13:01.210 00:13:01.210 Disk stats (read/write): 00:13:01.210 nvme0n1: ios=4177/4608, merge=0/0, ticks=24746/28197, in_queue=52943, util=99.30% 00:13:01.210 nvme0n2: ios=3463/3584, merge=0/0, ticks=37033/51623, in_queue=88656, util=90.13% 00:13:01.210 nvme0n3: ios=2617/2919, merge=0/0, ticks=27053/30112, in_queue=57165, util=97.27% 00:13:01.210 nvme0n4: ios=3304/3584, merge=0/0, ticks=39718/32834, in_queue=72552, util=95.34% 00:13:01.210 09:33:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:01.210 [global] 00:13:01.210 thread=1 00:13:01.210 invalidate=1 00:13:01.210 rw=randwrite 00:13:01.210 time_based=1 00:13:01.210 runtime=1 00:13:01.210 ioengine=libaio 00:13:01.210 direct=1 00:13:01.210 bs=4096 00:13:01.210 iodepth=128 00:13:01.210 norandommap=0 00:13:01.210 numjobs=1 00:13:01.210 00:13:01.210 verify_dump=1 00:13:01.210 verify_backlog=512 00:13:01.210 verify_state_save=0 00:13:01.210 do_verify=1 00:13:01.210 verify=crc32c-intel 00:13:01.210 [job0] 
00:13:01.210 filename=/dev/nvme0n1 00:13:01.210 [job1] 00:13:01.210 filename=/dev/nvme0n2 00:13:01.210 [job2] 00:13:01.210 filename=/dev/nvme0n3 00:13:01.210 [job3] 00:13:01.210 filename=/dev/nvme0n4 00:13:01.210 Could not set queue depth (nvme0n1) 00:13:01.210 Could not set queue depth (nvme0n2) 00:13:01.210 Could not set queue depth (nvme0n3) 00:13:01.210 Could not set queue depth (nvme0n4) 00:13:01.468 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.468 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.468 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.468 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.468 fio-3.35 00:13:01.468 Starting 4 threads 00:13:02.844 00:13:02.845 job0: (groupid=0, jobs=1): err= 0: pid=1467908: Mon Oct 7 09:33:57 2024 00:13:02.845 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:13:02.845 slat (usec): min=3, max=11107, avg=103.84, stdev=662.68 00:13:02.845 clat (usec): min=4082, max=41420, avg=12576.68, stdev=4176.09 00:13:02.845 lat (usec): min=4091, max=41429, avg=12680.52, stdev=4224.51 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 7046], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10552], 00:13:02.845 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:13:02.845 | 70.00th=[12518], 80.00th=[13304], 90.00th=[16581], 95.00th=[19792], 00:13:02.845 | 99.00th=[32375], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:13:02.845 | 99.99th=[41681] 00:13:02.845 write: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:13:02.845 slat (usec): min=4, max=12946, avg=92.37, stdev=523.01 00:13:02.845 clat (usec): min=521, max=51675, avg=13638.49, stdev=8229.06 00:13:02.845 lat (usec): min=530, max=51683, avg=13730.86, stdev=8272.33 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 996], 5.00th=[ 5211], 10.00th=[ 6521], 20.00th=[ 9241], 00:13:02.845 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:13:02.845 | 70.00th=[12780], 80.00th=[16909], 90.00th=[23462], 95.00th=[34341], 00:13:02.845 | 99.00th=[44827], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:13:02.845 | 99.99th=[51643] 00:13:02.845 bw ( KiB/s): min=18992, max=20464, per=29.56%, avg=19728.00, stdev=1040.86, samples=2 00:13:02.845 iops : min= 4748, max= 5116, avg=4932.00, stdev=260.22, samples=2 00:13:02.845 lat (usec) : 750=0.05%, 1000=0.50% 00:13:02.845 lat (msec) : 2=0.53%, 4=0.84%, 10=17.16%, 20=69.28%, 50=11.50% 00:13:02.845 lat (msec) : 100=0.14% 00:13:02.845 cpu : usr=4.39%, sys=7.98%, ctx=503, majf=0, minf=1 00:13:02.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:02.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.845 issued rwts: total=4608,5060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.845 job1: (groupid=0, jobs=1): err= 0: pid=1467910: Mon Oct 7 09:33:57 2024 00:13:02.845 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:13:02.845 slat (usec): min=2, max=16086, avg=124.35, stdev=868.19 00:13:02.845 clat (usec): min=3827, max=48586, avg=15128.30, stdev=6665.76 00:13:02.845 
lat (usec): min=3831, max=48604, avg=15252.65, stdev=6764.00 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11338], 00:13:02.845 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:13:02.845 | 70.00th=[14615], 80.00th=[21627], 90.00th=[23987], 95.00th=[28705], 00:13:02.845 | 99.00th=[36963], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:13:02.845 | 99.99th=[48497] 00:13:02.845 write: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1005msec); 0 zone resets 00:13:02.845 slat (usec): min=3, max=13788, avg=134.10, stdev=758.19 00:13:02.845 clat (usec): min=1684, max=60922, avg=18246.96, stdev=13213.45 00:13:02.845 lat (usec): min=4033, max=60928, avg=18381.06, stdev=13311.00 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[10683], 20.00th=[11207], 00:13:02.845 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:13:02.845 | 70.00th=[14877], 80.00th=[22152], 90.00th=[45876], 95.00th=[51643], 00:13:02.845 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:13:02.845 | 99.99th=[61080] 00:13:02.845 bw ( KiB/s): min= 9568, max=21408, per=23.21%, avg=15488.00, stdev=8372.14, samples=2 00:13:02.845 iops : min= 2392, max= 5352, avg=3872.00, stdev=2093.04, samples=2 00:13:02.845 lat (msec) : 2=0.01%, 4=0.11%, 10=7.08%, 20=70.57%, 50=18.62% 00:13:02.845 lat (msec) : 100=3.61% 00:13:02.845 cpu : usr=1.69%, sys=4.48%, ctx=372, majf=0, minf=1 00:13:02.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.845 issued rwts: total=3584,4000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.845 job2: (groupid=0, jobs=1): err= 0: pid=1467912: Mon Oct 7 09:33:57 2024 00:13:02.845 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:13:02.845 slat (usec): min=3, max=10637, avg=120.71, stdev=693.35 00:13:02.845 clat (usec): min=7910, max=34414, avg=15626.90, stdev=4166.94 00:13:02.845 lat (usec): min=7922, max=34433, avg=15747.61, stdev=4214.58 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11863], 20.00th=[13173], 00:13:02.845 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14877], 00:13:02.845 | 70.00th=[16581], 80.00th=[17957], 90.00th=[22676], 95.00th=[23725], 00:13:02.845 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31065], 99.95th=[33817], 00:13:02.845 | 99.99th=[34341] 00:13:02.845 write: IOPS=4006, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1008msec); 0 zone resets 00:13:02.845 slat (usec): min=5, max=11928, avg=132.85, stdev=818.93 00:13:02.845 clat (usec): min=4464, max=54550, avg=17677.47, stdev=7093.18 00:13:02.845 lat (usec): min=7788, max=54564, avg=17810.32, stdev=7164.07 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[13042], 20.00th=[13566], 00:13:02.845 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[15533], 00:13:02.845 | 70.00th=[17957], 80.00th=[22676], 90.00th=[25560], 95.00th=[30540], 00:13:02.845 | 99.00th=[49546], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:13:02.845 | 99.99th=[54789] 00:13:02.845 bw ( KiB/s): min=15416, max=15872, per=23.44%, avg=15644.00, stdev=322.44, samples=2 00:13:02.845 iops : min= 3854, max= 3968, 
avg=3911.00, stdev=80.61, samples=2 00:13:02.845 lat (msec) : 10=1.88%, 20=78.43%, 50=19.39%, 100=0.30% 00:13:02.845 cpu : usr=3.77%, sys=7.35%, ctx=335, majf=0, minf=1 00:13:02.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.845 issued rwts: total=3584,4039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.845 job3: (groupid=0, jobs=1): err= 0: pid=1467913: Mon Oct 7 09:33:57 2024 00:13:02.845 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:13:02.845 slat (usec): min=3, max=20442, avg=145.95, stdev=954.36 00:13:02.845 clat (usec): min=6525, max=91993, avg=18278.89, stdev=13932.83 00:13:02.845 lat (usec): min=6530, max=92011, avg=18424.83, stdev=14042.42 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10945], 20.00th=[12649], 00:13:02.845 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14353], 60.00th=[15139], 00:13:02.845 | 70.00th=[16188], 80.00th=[17957], 90.00th=[21103], 95.00th=[50594], 00:13:02.845 | 99.00th=[81265], 99.50th=[85459], 99.90th=[89654], 99.95th=[91751], 00:13:02.845 | 99.99th=[91751] 00:13:02.845 write: IOPS=3698, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec); 0 zone resets 00:13:02.845 slat (usec): min=4, max=10550, avg=122.57, stdev=664.03 00:13:02.845 clat (usec): min=674, max=62536, avg=16510.67, stdev=9215.49 00:13:02.845 lat (usec): min=5848, max=62543, avg=16633.24, stdev=9270.52 00:13:02.845 clat percentiles (usec): 00:13:02.845 | 1.00th=[ 6390], 5.00th=[ 8979], 10.00th=[11731], 20.00th=[12649], 00:13:02.845 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[14222], 00:13:02.845 | 70.00th=[14615], 80.00th=[17957], 90.00th=[23200], 95.00th=[31851], 00:13:02.845 | 99.00th=[62129], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:13:02.845 | 99.99th=[62653] 00:13:02.845 bw ( KiB/s): min=14224, max=14488, per=21.51%, avg=14356.00, stdev=186.68, samples=2 00:13:02.845 iops : min= 3556, max= 3622, avg=3589.00, stdev=46.67, samples=2 00:13:02.845 lat (usec) : 750=0.01% 00:13:02.845 lat (msec) : 10=6.15%, 20=79.19%, 50=10.60%, 100=4.04% 00:13:02.845 cpu : usr=2.39%, sys=5.28%, ctx=399, majf=0, minf=1 00:13:02.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:02.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.845 issued rwts: total=3584,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.845 00:13:02.845 Run status group 0 (all jobs): 00:13:02.845 READ: bw=59.5MiB/s (62.4MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=60.0MiB (62.9MB), run=1003-1008msec 00:13:02.845 WRITE: bw=65.2MiB/s (68.3MB/s), 14.4MiB/s-19.7MiB/s (15.1MB/s-20.7MB/s), io=65.7MiB (68.9MB), run=1003-1008msec 00:13:02.845 00:13:02.845 Disk stats (read/write): 00:13:02.845 nvme0n1: ios=3878/4096, merge=0/0, ticks=43083/53419, in_queue=96502, util=86.57% 00:13:02.845 nvme0n2: ios=2977/3072, merge=0/0, ticks=20165/25216, in_queue=45381, util=90.35% 00:13:02.845 nvme0n3: ios=3124/3487, merge=0/0, ticks=24959/26932, in_queue=51891, util=98.74% 00:13:02.845 nvme0n4: ios=2810/3072, merge=0/0, ticks=21528/19793, in_queue=41321, util=98.63% 
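(Note: the four foreground fio passes above differ only in queue depth and I/O pattern; judging by the job files printed in the log, fio-wrapper maps -i to bs, -d to iodepth, -t to rw, -r to runtime, and -v switches on the crc32c-intel verify options. The test script issues them as separate commands; a loop is only a compact way to show the pattern, with the wrapper path shortened:
for args in "-d 1 -t write" "-d 1 -t randwrite" "-d 128 -t write" "-d 128 -t randwrite"; do
  scripts/fio-wrapper -p nvmf -i 4096 $args -r 1 -v
done
The next phase starts a longer read workload in the background and deletes the raid and malloc bdevs underneath it, so the "Operation not supported" io_u errors that follow are the expected outcome of reads hitting namespaces whose backing bdevs were just removed.)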
00:13:02.845 09:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:02.845 09:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1468051 00:13:02.845 09:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:02.845 09:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:02.845 [global] 00:13:02.845 thread=1 00:13:02.845 invalidate=1 00:13:02.845 rw=read 00:13:02.845 time_based=1 00:13:02.845 runtime=10 00:13:02.845 ioengine=libaio 00:13:02.845 direct=1 00:13:02.845 bs=4096 00:13:02.845 iodepth=1 00:13:02.845 norandommap=1 00:13:02.845 numjobs=1 00:13:02.845 00:13:02.845 [job0] 00:13:02.845 filename=/dev/nvme0n1 00:13:02.845 [job1] 00:13:02.845 filename=/dev/nvme0n2 00:13:02.845 [job2] 00:13:02.845 filename=/dev/nvme0n3 00:13:02.845 [job3] 00:13:02.845 filename=/dev/nvme0n4 00:13:02.845 Could not set queue depth (nvme0n1) 00:13:02.846 Could not set queue depth (nvme0n2) 00:13:02.846 Could not set queue depth (nvme0n3) 00:13:02.846 Could not set queue depth (nvme0n4) 00:13:02.846 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.846 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.846 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.846 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.846 fio-3.35 00:13:02.846 Starting 4 threads 00:13:06.178 09:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:06.178 09:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:06.178 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=364544, buflen=4096 00:13:06.178 fio: pid=1468267, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:06.436 09:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.436 09:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:06.436 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=30224384, buflen=4096 00:13:06.436 fio: pid=1468266, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:07.003 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5160960, buflen=4096 00:13:07.003 fio: pid=1468255, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:07.003 09:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.003 09:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:07.569 09:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:13:07.569 09:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:07.569 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56098816, buflen=4096 00:13:07.569 fio: pid=1468265, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:07.569 00:13:07.569 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1468255: Mon Oct 7 09:34:02 2024 00:13:07.569 read: IOPS=322, BW=1289KiB/s (1320kB/s)(5040KiB/3909msec) 00:13:07.569 slat (usec): min=4, max=11421, avg=33.58, stdev=445.99 00:13:07.569 clat (usec): min=193, max=42022, avg=3046.77, stdev=10200.97 00:13:07.569 lat (usec): min=203, max=42039, avg=3080.37, stdev=10208.66 00:13:07.569 clat percentiles (usec): 00:13:07.569 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 235], 00:13:07.569 | 30.00th=[ 251], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 318], 00:13:07.569 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 461], 95.00th=[41157], 00:13:07.569 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:07.569 | 99.99th=[42206] 00:13:07.569 bw ( KiB/s): min= 96, max= 6096, per=6.81%, avg=1355.29, stdev=2315.47, samples=7 00:13:07.569 iops : min= 24, max= 1524, avg=338.71, stdev=578.79, samples=7 00:13:07.569 lat (usec) : 250=29.66%, 500=62.49%, 750=0.79% 00:13:07.569 lat (msec) : 2=0.08%, 4=0.16%, 50=6.74% 00:13:07.569 cpu : usr=0.18%, sys=0.44%, ctx=1267, majf=0, minf=1 00:13:07.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.569 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1468265: Mon Oct 7 09:34:02 2024 00:13:07.569 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(53.5MiB/4508msec) 00:13:07.569 slat (usec): min=6, max=23934, avg=13.86, stdev=274.75 00:13:07.569 clat (usec): min=178, max=41152, avg=311.06, stdev=1668.98 00:13:07.569 lat (usec): min=186, max=47937, avg=324.92, stdev=1703.93 00:13:07.569 clat percentiles (usec): 00:13:07.569 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 223], 00:13:07.569 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:13:07.569 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 281], 00:13:07.569 | 99.00th=[ 367], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41157], 00:13:07.569 | 99.99th=[41157] 00:13:07.569 bw ( KiB/s): min= 104, max=17192, per=67.28%, avg=13387.63, stdev=5530.67, samples=8 00:13:07.569 iops : min= 26, max= 4298, avg=3346.88, stdev=1382.67, samples=8 00:13:07.569 lat (usec) : 250=68.64%, 500=30.85%, 750=0.29%, 1000=0.02% 00:13:07.569 lat (msec) : 2=0.02%, 50=0.17% 00:13:07.569 cpu : usr=1.75%, sys=4.48%, ctx=13703, majf=0, minf=2 00:13:07.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 issued rwts: total=13697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.569 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:13:07.569 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1468266: Mon Oct 7 09:34:02 2024 00:13:07.569 read: IOPS=2187, BW=8751KiB/s (8961kB/s)(28.8MiB/3373msec) 00:13:07.569 slat (nsec): min=6636, max=38810, avg=9592.36, stdev=2277.55 00:13:07.569 clat (usec): min=187, max=41017, avg=441.89, stdev=2756.11 00:13:07.569 lat (usec): min=195, max=41034, avg=451.49, stdev=2756.53 00:13:07.569 clat percentiles (usec): 00:13:07.569 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:13:07.569 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:13:07.569 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:13:07.569 | 99.00th=[ 445], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:13:07.569 | 99.99th=[41157] 00:13:07.569 bw ( KiB/s): min= 96, max=15288, per=39.55%, avg=7870.67, stdev=7423.23, samples=6 00:13:07.569 iops : min= 24, max= 3822, avg=1967.67, stdev=1855.81, samples=6 00:13:07.569 lat (usec) : 250=53.40%, 500=45.93%, 750=0.16%, 1000=0.01% 00:13:07.569 lat (msec) : 2=0.01%, 50=0.46% 00:13:07.569 cpu : usr=1.39%, sys=2.61%, ctx=7380, majf=0, minf=1 00:13:07.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 issued rwts: total=7380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.569 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1468267: Mon Oct 7 09:34:02 2024 00:13:07.569 read: IOPS=29, BW=118KiB/s (121kB/s)(356KiB/3010msec) 00:13:07.569 slat (nsec): min=8963, max=28445, avg=17679.31, stdev=3279.41 00:13:07.569 clat (usec): min=287, max=42008, avg=33479.73, stdev=15692.40 00:13:07.569 lat (usec): min=299, max=42027, avg=33497.39, stdev=15693.54 00:13:07.569 clat percentiles (usec): 00:13:07.569 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 343], 20.00th=[40633], 00:13:07.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:07.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:07.569 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:07.569 | 99.99th=[42206] 00:13:07.569 bw ( KiB/s): min= 96, max= 160, per=0.59%, avg=118.67, stdev=25.51, samples=6 00:13:07.569 iops : min= 24, max= 40, avg=29.67, stdev= 6.38, samples=6 00:13:07.569 lat (usec) : 500=17.78% 00:13:07.569 lat (msec) : 50=81.11% 00:13:07.569 cpu : usr=0.00%, sys=0.07%, ctx=91, majf=0, minf=1 00:13:07.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.569 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.569 00:13:07.569 Run status group 0 (all jobs): 00:13:07.569 READ: bw=19.4MiB/s (20.4MB/s), 118KiB/s-11.9MiB/s (121kB/s-12.4MB/s), io=87.6MiB (91.8MB), run=3010-4508msec 00:13:07.570 00:13:07.570 Disk stats (read/write): 00:13:07.570 nvme0n1: ios=1297/0, merge=0/0, ticks=4785/0, in_queue=4785, util=99.32% 00:13:07.570 nvme0n2: ios=13692/0, merge=0/0, ticks=4036/0, in_queue=4036, 
util=95.74% 00:13:07.570 nvme0n3: ios=7379/0, merge=0/0, ticks=3247/0, in_queue=3247, util=96.99% 00:13:07.570 nvme0n4: ios=135/0, merge=0/0, ticks=3104/0, in_queue=3104, util=100.00% 00:13:07.827 09:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.827 09:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:08.761 09:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:08.761 09:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:09.327 09:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:09.327 09:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:09.893 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:09.893 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:10.151 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:10.151 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1468051 00:13:10.151 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:10.151 09:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:10.410 nvmf hotplug test: fio failed as expected 00:13:10.410 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.668 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.668 rmmod nvme_tcp 00:13:10.668 rmmod nvme_fabrics 00:13:10.668 rmmod nvme_keyring 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1465740 ']' 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1465740 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1465740 ']' 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1465740 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465740 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465740' 00:13:10.928 killing process with pid 1465740 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1465740 00:13:10.928 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1465740 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 
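The hotplug phase that this teardown closes out follows a simple pattern: start fio reading from the exported namespaces in the background, delete the backing RAID and malloc bdevs over RPC while it runs, and require fio to exit non-zero. A condensed sketch of that sequence, assembled from the commands visible in this log (the SPDK variable is shorthand for the workspace path; treat this as an outline, not the literal fio.sh):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 10-second read job against the connected namespaces, run in the background
    "$SPDK"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # pull the backing bdevs out from under the running I/O
    "$SPDK"/scripts/rpc.py bdev_raid_delete concat0
    "$SPDK"/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK"/scripts/rpc.py bdev_malloc_delete "$m"
    done

    # fio is now expected to fail, matching the log message above
    if wait "$fio_pid"; then
        echo "ERROR: fio survived bdev removal" >&2
    else
        echo 'nvmf hotplug test: fio failed as expected'
    fi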
00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.187 09:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.088 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.088 00:13:13.088 real 0m30.585s 00:13:13.088 user 1m52.634s 00:13:13.088 sys 0m8.486s 00:13:13.088 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.088 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.088 ************************************ 00:13:13.088 END TEST nvmf_fio_target 00:13:13.088 ************************************ 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.348 ************************************ 00:13:13.348 START TEST nvmf_bdevio 00:13:13.348 ************************************ 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:13.348 * Looking for test storage... 
00:13:13.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.348 09:34:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.348 --rc genhtml_branch_coverage=1 00:13:13.348 --rc genhtml_function_coverage=1 00:13:13.348 --rc genhtml_legend=1 00:13:13.348 --rc geninfo_all_blocks=1 00:13:13.348 --rc geninfo_unexecuted_blocks=1 00:13:13.348 00:13:13.348 ' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.348 --rc genhtml_branch_coverage=1 00:13:13.348 --rc genhtml_function_coverage=1 00:13:13.348 --rc genhtml_legend=1 00:13:13.348 --rc geninfo_all_blocks=1 00:13:13.348 --rc geninfo_unexecuted_blocks=1 00:13:13.348 00:13:13.348 ' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.348 --rc genhtml_branch_coverage=1 00:13:13.348 --rc genhtml_function_coverage=1 00:13:13.348 --rc genhtml_legend=1 00:13:13.348 --rc geninfo_all_blocks=1 00:13:13.348 --rc geninfo_unexecuted_blocks=1 00:13:13.348 00:13:13.348 ' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.348 --rc genhtml_branch_coverage=1 00:13:13.348 --rc genhtml_function_coverage=1 00:13:13.348 --rc genhtml_legend=1 00:13:13.348 --rc geninfo_all_blocks=1 00:13:13.348 --rc geninfo_unexecuted_blocks=1 00:13:13.348 00:13:13.348 ' 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.348 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.349 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.607 09:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:16.137 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:16.137 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.137 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.138 09:34:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:16.138 Found net devices under 0000:84:00.0: cvl_0_0 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:16.138 Found net devices under 0000:84:00.1: cvl_0_1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.138 
09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:13:16.138 00:13:16.138 --- 10.0.0.2 ping statistics --- 00:13:16.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.138 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:16.138 00:13:16.138 --- 10.0.0.1 ping statistics --- 00:13:16.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.138 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:16.138 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1471185 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1471185 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1471185 ']' 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.396 09:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.396 [2024-10-07 09:34:11.017962] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:13:16.396 [2024-10-07 09:34:11.018041] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.396 [2024-10-07 09:34:11.083249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.396 [2024-10-07 09:34:11.197358] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.396 [2024-10-07 09:34:11.197415] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.396 [2024-10-07 09:34:11.197429] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.396 [2024-10-07 09:34:11.197440] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.396 [2024-10-07 09:34:11.197450] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.396 [2024-10-07 09:34:11.199233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.396 [2024-10-07 09:34:11.199298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.396 [2024-10-07 09:34:11.199363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.396 [2024-10-07 09:34:11.199366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.655 [2024-10-07 09:34:11.372696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.655 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.656 Malloc0 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.656 09:34:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:16.656 [2024-10-07 09:34:11.424037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:16.656 { 00:13:16.656 "params": { 00:13:16.656 "name": "Nvme$subsystem", 00:13:16.656 "trtype": "$TEST_TRANSPORT", 00:13:16.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:16.656 "adrfam": "ipv4", 00:13:16.656 "trsvcid": "$NVMF_PORT", 00:13:16.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:16.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:16.656 "hdgst": ${hdgst:-false}, 00:13:16.656 "ddgst": ${ddgst:-false} 00:13:16.656 }, 00:13:16.656 "method": "bdev_nvme_attach_controller" 00:13:16.656 } 00:13:16.656 EOF 00:13:16.656 )") 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:13:16.656 09:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:16.656 "params": { 00:13:16.656 "name": "Nvme1", 00:13:16.656 "trtype": "tcp", 00:13:16.656 "traddr": "10.0.0.2", 00:13:16.656 "adrfam": "ipv4", 00:13:16.656 "trsvcid": "4420", 00:13:16.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:16.656 "hdgst": false, 00:13:16.656 "ddgst": false 00:13:16.656 }, 00:13:16.656 "method": "bdev_nvme_attach_controller" 00:13:16.656 }' 00:13:16.914 [2024-10-07 09:34:11.483243] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:13:16.914 [2024-10-07 09:34:11.483335] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471238 ] 00:13:16.914 [2024-10-07 09:34:11.556635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.914 [2024-10-07 09:34:11.675318] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.914 [2024-10-07 09:34:11.675350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.915 [2024-10-07 09:34:11.675354] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.172 I/O targets: 00:13:17.172 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:17.172 00:13:17.172 00:13:17.172 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.173 http://cunit.sourceforge.net/ 00:13:17.173 00:13:17.173 00:13:17.173 Suite: bdevio tests on: Nvme1n1 00:13:17.173 Test: blockdev write read block ...passed 00:13:17.431 Test: blockdev write zeroes read block ...passed 00:13:17.431 Test: blockdev write zeroes read no split ...passed 00:13:17.431 Test: blockdev write zeroes read split ...passed 00:13:17.431 Test: blockdev write zeroes read split partial ...passed 00:13:17.431 Test: blockdev reset ...[2024-10-07 09:34:12.022371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:17.431 [2024-10-07 09:34:12.022477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a00c0 (9): Bad file descriptor 00:13:17.431 [2024-10-07 09:34:12.036143] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:17.431 passed 00:13:17.431 Test: blockdev write read 8 blocks ...passed 00:13:17.431 Test: blockdev write read size > 128k ...passed 00:13:17.431 Test: blockdev write read invalid size ...passed 00:13:17.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.431 Test: blockdev write read max offset ...passed 00:13:17.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.431 Test: blockdev writev readv 8 blocks ...passed 00:13:17.431 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.689 Test: blockdev writev readv block ...passed 00:13:17.689 Test: blockdev writev readv size > 128k ...passed 00:13:17.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.689 Test: blockdev comparev and writev ...[2024-10-07 09:34:12.252040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.252076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.252101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.252119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.252585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.252610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.252633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.252650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.253119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.253144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.253175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.253191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.253594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.253618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.253639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.689 [2024-10-07 09:34:12.253656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:17.689 passed 00:13:17.689 Test: blockdev nvme passthru rw ...passed 00:13:17.689 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:34:12.336325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.689 [2024-10-07 09:34:12.336365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:17.689 [2024-10-07 09:34:12.336566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.689 [2024-10-07 09:34:12.336589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:17.690 [2024-10-07 09:34:12.336728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.690 [2024-10-07 09:34:12.336751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:17.690 [2024-10-07 09:34:12.336901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.690 [2024-10-07 09:34:12.336925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:17.690 passed 00:13:17.690 Test: blockdev nvme admin passthru ...passed 00:13:17.690 Test: blockdev copy ...passed 00:13:17.690 00:13:17.690 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.690 suites 1 1 n/a 0 0 00:13:17.690 tests 23 23 23 0 0 00:13:17.690 asserts 152 152 152 0 n/a 00:13:17.690 00:13:17.690 Elapsed time = 0.976 seconds 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.948 rmmod nvme_tcp 00:13:17.948 rmmod nvme_fabrics 00:13:17.948 rmmod nvme_keyring 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
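With all 23 bdevio tests passed, the suite tears itself down: nvmftestfini starts in the trace above and finishes below. A condensed, hypothetical sketch of that unwind sequence, assuming the pid, interface, and namespace names used in this job and that _remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace; it is not the literal nvmf/common.sh helper code:

# Condensed sketch of the teardown path traced around this point.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # bdevio.sh@26: drop the test subsystem
sync
modprobe -v -r nvme-tcp              # unloads nvme_tcp plus the nvme_fabrics/nvme_keyring dependencies
modprobe -v -r nvme-fabrics
kill 1471185                         # killprocess: stop the nvmf target app (the helper also waits on it)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the firewall rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk      # assumption: the effect of _remove_spdk_ns in this setup
ip -4 addr flush cvl_0_1             # clear the initiator-side address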
00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1471185 ']' 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1471185 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1471185 ']' 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1471185 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.948 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471185 00:13:18.206 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:18.206 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:18.206 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471185' 00:13:18.206 killing process with pid 1471185 00:13:18.206 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1471185 00:13:18.206 09:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1471185 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.465 09:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.373 00:13:20.373 real 0m7.171s 00:13:20.373 user 0m10.484s 00:13:20.373 sys 0m2.650s 00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.373 ************************************ 00:13:20.373 END TEST nvmf_bdevio 00:13:20.373 ************************************ 00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:20.373 00:13:20.373 real 4m25.331s 00:13:20.373 user 11m35.389s 00:13:20.373 sys 1m19.496s 
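Every suite in this log is driven through run_test, which prints the asterisk START/END banners and the real/user/sys timing seen here. A simplified, hypothetical re-implementation of that wrapper pattern (the actual helper in common/autotest_common.sh also manages xtrace state and argument validation):

# Hypothetical sketch of the run_test wrapper pattern visible in this log; not the literal helper.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # run and time the wrapped test script
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# Usage matching the call traced below:
run_test nvmf_target_extra test/nvmf/nvmf_target_extra.sh --transport=tcp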
00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.373 09:34:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:20.373 ************************************ 00:13:20.373 END TEST nvmf_target_core 00:13:20.373 ************************************ 00:13:20.373 09:34:15 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:20.373 09:34:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.373 09:34:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.373 09:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.631 ************************************ 00:13:20.631 START TEST nvmf_target_extra 00:13:20.631 ************************************ 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:20.631 * Looking for test storage... 00:13:20.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.631 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:20.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.632 --rc genhtml_branch_coverage=1 00:13:20.632 --rc genhtml_function_coverage=1 00:13:20.632 --rc genhtml_legend=1 00:13:20.632 --rc geninfo_all_blocks=1 00:13:20.632 --rc geninfo_unexecuted_blocks=1 00:13:20.632 00:13:20.632 ' 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:20.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.632 --rc genhtml_branch_coverage=1 00:13:20.632 --rc genhtml_function_coverage=1 00:13:20.632 --rc genhtml_legend=1 00:13:20.632 --rc geninfo_all_blocks=1 00:13:20.632 --rc geninfo_unexecuted_blocks=1 00:13:20.632 00:13:20.632 ' 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:20.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.632 --rc genhtml_branch_coverage=1 00:13:20.632 --rc genhtml_function_coverage=1 00:13:20.632 --rc genhtml_legend=1 00:13:20.632 --rc geninfo_all_blocks=1 00:13:20.632 --rc geninfo_unexecuted_blocks=1 00:13:20.632 00:13:20.632 ' 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:20.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.632 --rc genhtml_branch_coverage=1 00:13:20.632 --rc genhtml_function_coverage=1 00:13:20.632 --rc genhtml_legend=1 00:13:20.632 --rc geninfo_all_blocks=1 00:13:20.632 --rc geninfo_unexecuted_blocks=1 00:13:20.632 00:13:20.632 ' 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.632 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
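Each time a test script sources the common helpers, the installed lcov version is probed and compared against a threshold, which is what the lt 1.15 2 / cmp_versions trace above walks through: both versions are split on '.' and '-' and compared component by component. A simplified sketch of that comparison, assuming purely numeric components (the real scripts/common.sh also validates each field with its decimal helper):

# Simplified version-compare sketch; returns success (0) when $1 is strictly older than $2.
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2, so the branch/function coverage rc options get set"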
00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.891 09:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.891 ************************************ 00:13:20.891 START TEST nvmf_example 00:13:20.891 ************************************ 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:20.892 * Looking for test storage... 
00:13:20.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:20.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.892 --rc genhtml_branch_coverage=1 00:13:20.892 --rc genhtml_function_coverage=1 00:13:20.892 --rc genhtml_legend=1 00:13:20.892 --rc geninfo_all_blocks=1 00:13:20.892 --rc geninfo_unexecuted_blocks=1 00:13:20.892 00:13:20.892 ' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:20.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.892 --rc genhtml_branch_coverage=1 00:13:20.892 --rc genhtml_function_coverage=1 00:13:20.892 --rc genhtml_legend=1 00:13:20.892 --rc geninfo_all_blocks=1 00:13:20.892 --rc geninfo_unexecuted_blocks=1 00:13:20.892 00:13:20.892 ' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:20.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.892 --rc genhtml_branch_coverage=1 00:13:20.892 --rc genhtml_function_coverage=1 00:13:20.892 --rc genhtml_legend=1 00:13:20.892 --rc geninfo_all_blocks=1 00:13:20.892 --rc geninfo_unexecuted_blocks=1 00:13:20.892 00:13:20.892 ' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:20.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.892 --rc genhtml_branch_coverage=1 00:13:20.892 --rc genhtml_function_coverage=1 00:13:20.892 --rc genhtml_legend=1 00:13:20.892 --rc geninfo_all_blocks=1 00:13:20.892 --rc geninfo_unexecuted_blocks=1 00:13:20.892 00:13:20.892 ' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:20.892 09:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:20.892 09:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:20.892 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:21.150 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.150 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.150 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.150 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:21.151 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:21.151 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.151 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:23.687 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:23.687 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:23.687 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:23.687 Found net devices under 0000:84:00.0: cvl_0_0 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:23.687 Found net devices under 0000:84:00.1: cvl_0_1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.687 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.687 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:23.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:13:23.687 00:13:23.687 --- 10.0.0.2 ping statistics --- 00:13:23.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.688 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:23.688 00:13:23.688 --- 10.0.0.1 ping statistics --- 00:13:23.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.688 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1473577 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1473577 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1473577 ']' 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.688 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.688 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:13:24.257 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:36.454 Initializing NVMe Controllers
00:13:36.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:36.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:36.454 Initialization complete. Launching workers.
00:13:36.454 ========================================================
00:13:36.454                                                          Latency(us)
00:13:36.454 Device Information                                     :     IOPS    MiB/s   Average      min      max
00:13:36.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14760.29    57.66   4335.66   747.10 16945.47
00:13:36.454 ========================================================
00:13:36.454 Total                                                  : 14760.29    57.66   4335.66   747.10 16945.47
00:13:36.454
00:13:36.454 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:13:36.454 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:13:36.454 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:36.454 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:13:36.454 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:36.455 rmmod nvme_tcp
00:13:36.455 rmmod nvme_fabrics
00:13:36.455 rmmod nvme_keyring
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1473577 ']'
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1473577
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1473577 ']'
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1473577
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473577
00:13:36.455 09:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473577' 00:13:36.455 killing process with pid 1473577 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1473577 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1473577 00:13:36.455 nvmf threads initialize successfully 00:13:36.455 bdev subsystem init successfully 00:13:36.455 created a nvmf target service 00:13:36.455 create targets's poll groups done 00:13:36.455 all subsystems of target started 00:13:36.455 nvmf target is running 00:13:36.455 all subsystems of target stopped 00:13:36.455 destroy targets's poll groups done 00:13:36.455 destroyed the nvmf target service 00:13:36.455 bdev subsystem finish successfully 00:13:36.455 nvmf threads destroy successfully 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.455 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:37.022 00:13:37.022 real 0m16.094s 00:13:37.022 user 0m42.452s 00:13:37.022 sys 0m4.173s 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:37.022 ************************************ 00:13:37.022 END TEST nvmf_example 00:13:37.022 ************************************ 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
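On the initiator side, the whole example test that just finished boils down to the single spdk_nvme_perf run shown above; spelled out, that invocation (copied from the trace, comments added) is:

  # -q 64: 64 outstanding I/Os; -o 4096: 4 KiB I/O size; -w randrw -M 30: random mix with 30% reads;
  # -t 10: run for 10 seconds; -r: NVMe-oF transport ID of the listener created earlier
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Against the Malloc0-backed namespace this run reported about 14,760 IOPS (~58 MiB/s) at an average latency of roughly 4.3 ms, as summarized in the table above.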
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.022 ************************************ 00:13:37.022 START TEST nvmf_filesystem 00:13:37.022 ************************************ 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:37.022 * Looking for test storage... 00:13:37.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:37.022 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.284 --rc genhtml_branch_coverage=1 00:13:37.284 --rc genhtml_function_coverage=1 00:13:37.284 --rc genhtml_legend=1 00:13:37.284 --rc geninfo_all_blocks=1 00:13:37.284 --rc geninfo_unexecuted_blocks=1 00:13:37.284 00:13:37.284 ' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.284 --rc genhtml_branch_coverage=1 00:13:37.284 --rc genhtml_function_coverage=1 00:13:37.284 --rc genhtml_legend=1 00:13:37.284 --rc geninfo_all_blocks=1 00:13:37.284 --rc geninfo_unexecuted_blocks=1 00:13:37.284 00:13:37.284 ' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.284 --rc genhtml_branch_coverage=1 00:13:37.284 --rc genhtml_function_coverage=1 00:13:37.284 --rc genhtml_legend=1 00:13:37.284 --rc geninfo_all_blocks=1 00:13:37.284 --rc geninfo_unexecuted_blocks=1 00:13:37.284 00:13:37.284 ' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.284 --rc genhtml_branch_coverage=1 00:13:37.284 --rc genhtml_function_coverage=1 00:13:37.284 --rc genhtml_legend=1 00:13:37.284 --rc geninfo_all_blocks=1 00:13:37.284 --rc geninfo_unexecuted_blocks=1 00:13:37.284 00:13:37.284 ' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:37.284 09:34:31 
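The lcov probe above leans on the small version comparator in scripts/common.sh whose steps the trace walks through: both version strings are split on '.', '-' and ':' and compared numerically field by field. Not the SPDK helper itself, but a self-contained sketch of the same idea:

  # returns success when $1 sorts before $2, e.g. 1.15 < 2 because 1 < 2 in the first field
  version_lt() {
      local IFS='.-:'
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not 'less than'
  }
  version_lt 1.15 2 && echo "lcov predates 2.x, use the 1.x-style --rc option names"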
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:37.284 09:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:37.284 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:37.285 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:37.285 #define SPDK_CONFIG_H 00:13:37.285 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:37.285 #define SPDK_CONFIG_APPS 1 00:13:37.285 #define SPDK_CONFIG_ARCH native 00:13:37.285 #undef SPDK_CONFIG_ASAN 00:13:37.285 #undef SPDK_CONFIG_AVAHI 00:13:37.285 #undef SPDK_CONFIG_CET 00:13:37.285 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:37.285 #define SPDK_CONFIG_COVERAGE 1 00:13:37.285 #define SPDK_CONFIG_CROSS_PREFIX 00:13:37.285 #undef SPDK_CONFIG_CRYPTO 00:13:37.285 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:37.285 #undef SPDK_CONFIG_CUSTOMOCF 00:13:37.285 #undef SPDK_CONFIG_DAOS 00:13:37.285 #define SPDK_CONFIG_DAOS_DIR 00:13:37.285 #define SPDK_CONFIG_DEBUG 1 00:13:37.285 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:37.285 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:37.285 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:37.285 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:37.285 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:37.285 #undef SPDK_CONFIG_DPDK_UADK 00:13:37.285 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:37.285 #define SPDK_CONFIG_EXAMPLES 1 00:13:37.285 #undef SPDK_CONFIG_FC 00:13:37.285 #define SPDK_CONFIG_FC_PATH 00:13:37.285 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:37.285 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:37.285 #define SPDK_CONFIG_FSDEV 1 00:13:37.285 #undef SPDK_CONFIG_FUSE 00:13:37.285 #undef SPDK_CONFIG_FUZZER 00:13:37.285 #define SPDK_CONFIG_FUZZER_LIB 00:13:37.285 #undef SPDK_CONFIG_GOLANG 00:13:37.285 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:37.285 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:37.285 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:37.285 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:37.285 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:37.285 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:37.285 #undef SPDK_CONFIG_HAVE_LZ4 00:13:37.285 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:37.285 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:37.285 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:37.285 #define SPDK_CONFIG_IDXD 1 00:13:37.285 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:37.285 #undef SPDK_CONFIG_IPSEC_MB 00:13:37.285 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:37.285 #define SPDK_CONFIG_ISAL 1 00:13:37.285 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:37.285 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:37.285 #define SPDK_CONFIG_LIBDIR 00:13:37.285 #undef SPDK_CONFIG_LTO 00:13:37.285 #define SPDK_CONFIG_MAX_LCORES 128 00:13:37.285 #define SPDK_CONFIG_NVME_CUSE 1 00:13:37.286 #undef SPDK_CONFIG_OCF 00:13:37.286 #define SPDK_CONFIG_OCF_PATH 00:13:37.286 #define SPDK_CONFIG_OPENSSL_PATH 00:13:37.286 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:37.286 #define SPDK_CONFIG_PGO_DIR 00:13:37.286 #undef SPDK_CONFIG_PGO_USE 00:13:37.286 #define SPDK_CONFIG_PREFIX /usr/local 00:13:37.286 #undef SPDK_CONFIG_RAID5F 00:13:37.286 #undef SPDK_CONFIG_RBD 00:13:37.286 #define SPDK_CONFIG_RDMA 1 00:13:37.286 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:37.286 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:37.286 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:37.286 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:37.286 #define SPDK_CONFIG_SHARED 1 00:13:37.286 #undef SPDK_CONFIG_SMA 00:13:37.286 #define SPDK_CONFIG_TESTS 1 00:13:37.286 #undef SPDK_CONFIG_TSAN 00:13:37.286 #define SPDK_CONFIG_UBLK 1 00:13:37.286 #define SPDK_CONFIG_UBSAN 1 00:13:37.286 #undef SPDK_CONFIG_UNIT_TESTS 00:13:37.286 #undef SPDK_CONFIG_URING 00:13:37.286 #define 
SPDK_CONFIG_URING_PATH 00:13:37.286 #undef SPDK_CONFIG_URING_ZNS 00:13:37.286 #undef SPDK_CONFIG_USDT 00:13:37.286 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:37.286 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:37.286 #define SPDK_CONFIG_VFIO_USER 1 00:13:37.286 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:37.286 #define SPDK_CONFIG_VHOST 1 00:13:37.286 #define SPDK_CONFIG_VIRTIO 1 00:13:37.286 #undef SPDK_CONFIG_VTUNE 00:13:37.286 #define SPDK_CONFIG_VTUNE_DIR 00:13:37.286 #define SPDK_CONFIG_WERROR 1 00:13:37.286 #define SPDK_CONFIG_WPDK_DIR 00:13:37.286 #undef SPDK_CONFIG_XNVME 00:13:37.286 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.286 09:34:31 
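All of the config.h contents dumped above feed one probe in test/common/applications.sh: the generated header is glob-matched for the SPDK_CONFIG_DEBUG define (together with SPDK_AUTOTEST_DEBUG_APPS) before the test app paths are finalized. A minimal sketch of that style of check, with the path copied from the log:

  # read the generated SPDK config header and detect a debug build with a pattern match
  config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build"
  fi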
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:37.286 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:37.287 
09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:37.287 09:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:37.287 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:37.288 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1475172 ]] 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1475172 00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
00:13:37.288 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.DzjAp0 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DzjAp0/tests/target /tmp/spdk.DzjAp0 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=660762624 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:37.289 09:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623667200 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=38333726720 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=45077094400 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6743367680 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22528516096 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538547200 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=8992956416 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9015418880 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22462464 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22538055680 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538547200 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=491520 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:37.289 09:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4507697152 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4507709440 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:37.289 * Looking for test storage... 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=38333726720 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8957960192 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:37.289 09:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:37.289 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:37.290 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.549 --rc genhtml_branch_coverage=1 00:13:37.549 --rc genhtml_function_coverage=1 00:13:37.549 --rc genhtml_legend=1 00:13:37.549 --rc geninfo_all_blocks=1 00:13:37.549 --rc geninfo_unexecuted_blocks=1 00:13:37.549 00:13:37.549 ' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.549 --rc genhtml_branch_coverage=1 00:13:37.549 --rc genhtml_function_coverage=1 00:13:37.549 --rc genhtml_legend=1 00:13:37.549 --rc geninfo_all_blocks=1 00:13:37.549 --rc geninfo_unexecuted_blocks=1 00:13:37.549 00:13:37.549 ' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.549 --rc genhtml_branch_coverage=1 00:13:37.549 --rc genhtml_function_coverage=1 00:13:37.549 --rc genhtml_legend=1 00:13:37.549 --rc geninfo_all_blocks=1 00:13:37.549 --rc geninfo_unexecuted_blocks=1 00:13:37.549 00:13:37.549 ' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.549 --rc genhtml_branch_coverage=1 00:13:37.549 --rc genhtml_function_coverage=1 00:13:37.549 --rc genhtml_legend=1 00:13:37.549 --rc geninfo_all_blocks=1 00:13:37.549 --rc geninfo_unexecuted_blocks=1 00:13:37.549 00:13:37.549 ' 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:37.549 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.550 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:40.082 
09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.082 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:40.083 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:40.083 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:40.083 Found net devices under 0000:84:00.0: cvl_0_0 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:40.083 Found net devices under 
0000:84:00.1: cvl_0_1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.083 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:13:40.342 00:13:40.342 --- 10.0.0.2 ping statistics --- 00:13:40.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.342 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:13:40.342 00:13:40.342 --- 10.0.0.1 ping statistics --- 00:13:40.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.342 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.342 ************************************ 00:13:40.342 START TEST nvmf_filesystem_no_in_capsule 00:13:40.342 ************************************ 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1476955 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1476955 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1476955 ']' 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.342 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.342 [2024-10-07 09:34:35.028216] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:13:40.342 [2024-10-07 09:34:35.028324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.342 [2024-10-07 09:34:35.107187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.600 [2024-10-07 09:34:35.229094] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.600 [2024-10-07 09:34:35.229172] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.600 [2024-10-07 09:34:35.229189] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.601 [2024-10-07 09:34:35.229203] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.601 [2024-10-07 09:34:35.229215] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:40.601 [2024-10-07 09:34:35.231135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.601 [2024-10-07 09:34:35.231200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.601 [2024-10-07 09:34:35.231239] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.601 [2024-10-07 09:34:35.231242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.861 [2024-10-07 09:34:35.561218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.861 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.152 Malloc1 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.152 09:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.152 [2024-10-07 09:34:35.751161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.152 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:41.152 { 00:13:41.152 "name": "Malloc1", 00:13:41.152 "aliases": [ 00:13:41.152 "b2650bf8-30dc-4f9f-ba13-5de2fbad3c0e" 00:13:41.152 ], 00:13:41.152 "product_name": "Malloc disk", 00:13:41.152 "block_size": 512, 00:13:41.152 "num_blocks": 1048576, 00:13:41.152 "uuid": "b2650bf8-30dc-4f9f-ba13-5de2fbad3c0e", 00:13:41.152 "assigned_rate_limits": { 00:13:41.153 "rw_ios_per_sec": 0, 00:13:41.153 "rw_mbytes_per_sec": 0, 00:13:41.153 "r_mbytes_per_sec": 0, 00:13:41.153 "w_mbytes_per_sec": 0 00:13:41.153 }, 00:13:41.153 "claimed": true, 00:13:41.153 "claim_type": "exclusive_write", 00:13:41.153 "zoned": false, 00:13:41.153 "supported_io_types": { 00:13:41.153 "read": 
true, 00:13:41.153 "write": true, 00:13:41.153 "unmap": true, 00:13:41.153 "flush": true, 00:13:41.153 "reset": true, 00:13:41.153 "nvme_admin": false, 00:13:41.153 "nvme_io": false, 00:13:41.153 "nvme_io_md": false, 00:13:41.153 "write_zeroes": true, 00:13:41.153 "zcopy": true, 00:13:41.153 "get_zone_info": false, 00:13:41.153 "zone_management": false, 00:13:41.153 "zone_append": false, 00:13:41.153 "compare": false, 00:13:41.153 "compare_and_write": false, 00:13:41.153 "abort": true, 00:13:41.153 "seek_hole": false, 00:13:41.153 "seek_data": false, 00:13:41.153 "copy": true, 00:13:41.153 "nvme_iov_md": false 00:13:41.153 }, 00:13:41.153 "memory_domains": [ 00:13:41.153 { 00:13:41.153 "dma_device_id": "system", 00:13:41.153 "dma_device_type": 1 00:13:41.153 }, 00:13:41.153 { 00:13:41.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.153 "dma_device_type": 2 00:13:41.153 } 00:13:41.153 ], 00:13:41.153 "driver_specific": {} 00:13:41.153 } 00:13:41.153 ]' 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:41.153 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.744 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.744 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:41.744 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.744 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:41.744 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:44.293 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:44.859 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.793 ************************************ 00:13:45.793 START TEST filesystem_ext4 00:13:45.793 ************************************ 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
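Before the per-filesystem tests begin, the harness has already connected the host to the subsystem, resolved the namespace's block device by its serial, confirmed its size, and carved a single GPT partition. A condensed, hedged replay of those host-side steps, with every value taken from this run:

  # host-side discovery and partitioning as traced above (names and values from this run)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
               --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')  # -> nvme0n1
  # size check: 512-byte blocks * 1048576 blocks = 536870912 bytes, i.e. the 512 MiB Malloc1 bdev
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe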
00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:45.793 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:45.793 mke2fs 1.47.0 (5-Feb-2023) 00:13:46.051 Discarding device blocks: 0/522240 done 00:13:46.051 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:46.051 Filesystem UUID: cb192416-794e-48e8-b266-cc4c472eff47 00:13:46.051 Superblock backups stored on blocks: 00:13:46.051 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:46.051 00:13:46.051 Allocating group tables: 0/64 done 00:13:46.051 Writing inode tables: 0/64 done 00:13:49.334 Creating journal (8192 blocks): done 00:13:49.334 Writing superblocks and filesystem accounting information: 0/64 done 00:13:49.334 00:13:49.334 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:49.334 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:54.601 
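The ext4 run above, and the btrfs and xfs runs that follow, all exercise the same mount smoke test; the exact commands from the trace are:

  # per-filesystem smoke test (identical for ext4, btrfs and xfs)
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device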
09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1476955 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:54.601 00:13:54.601 real 0m8.801s 00:13:54.601 user 0m0.015s 00:13:54.601 sys 0m0.072s 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:54.601 ************************************ 00:13:54.601 END TEST filesystem_ext4 00:13:54.601 ************************************ 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.601 ************************************ 00:13:54.601 START TEST filesystem_btrfs 00:13:54.601 ************************************ 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:54.601 09:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:54.601 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:54.859 btrfs-progs v6.8.1 00:13:54.859 See https://btrfs.readthedocs.io for more information. 00:13:54.859 00:13:54.859 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:54.859 NOTE: several default settings have changed in version 5.15, please make sure 00:13:54.859 this does not affect your deployments: 00:13:54.859 - DUP for metadata (-m dup) 00:13:54.859 - enabled no-holes (-O no-holes) 00:13:54.859 - enabled free-space-tree (-R free-space-tree) 00:13:54.859 00:13:54.859 Label: (null) 00:13:54.859 UUID: 3c11fa0f-4f1a-49b2-939d-7bd426bdcfe8 00:13:54.859 Node size: 16384 00:13:54.859 Sector size: 4096 (CPU page size: 4096) 00:13:54.860 Filesystem size: 510.00MiB 00:13:54.860 Block group profiles: 00:13:54.860 Data: single 8.00MiB 00:13:54.860 Metadata: DUP 32.00MiB 00:13:54.860 System: DUP 8.00MiB 00:13:54.860 SSD detected: yes 00:13:54.860 Zoned device: no 00:13:54.860 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:54.860 Checksum: crc32c 00:13:54.860 Number of devices: 1 00:13:54.860 Devices: 00:13:54.860 ID SIZE PATH 00:13:54.860 1 510.00MiB /dev/nvme0n1p1 00:13:54.860 00:13:54.860 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:54.860 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1476955 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:55.118 
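The only per-filesystem difference in setup is the force flag passed to mkfs: the trace shows -F for ext4 and -f for everything else. A hedged sketch of the make_filesystem helper implied by the trace (omitting the retry counter it initializes):

  # sketch only, reconstructed from the xtrace; not the helper's actual source
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      [ "$fstype" = ext4 ] && force=-F || force=-f
      "mkfs.$fstype" "$force" "$dev_name"
  }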
09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:55.118 00:13:55.118 real 0m0.515s 00:13:55.118 user 0m0.027s 00:13:55.118 sys 0m0.099s 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:55.118 ************************************ 00:13:55.118 END TEST filesystem_btrfs 00:13:55.118 ************************************ 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.118 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.377 ************************************ 00:13:55.377 START TEST filesystem_xfs 00:13:55.377 ************************************ 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:55.377 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:55.377 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:55.377 = sectsz=512 attr=2, projid32bit=1 00:13:55.377 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:55.377 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:55.377 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:55.377 = sunit=0 swidth=0 blks 00:13:55.377 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:55.377 log =internal log bsize=4096 blocks=16384, version=2 00:13:55.377 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:55.377 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:56.316 Discarding blocks...Done. 00:13:56.316 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:56.316 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1476955 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.213 00:13:58.213 real 0m2.743s 00:13:58.213 user 0m0.014s 00:13:58.213 sys 0m0.066s 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:58.213 ************************************ 00:13:58.213 END TEST filesystem_xfs 00:13:58.213 ************************************ 00:13:58.213 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:58.213 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:58.213 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.470 09:34:53 
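Each suite tears the host side down the same way once the xfs run completes: the test partition is removed under an flock on the parent device, data is flushed, and the controller is disconnected; waitforserial_disconnect then polls lsblk until the SPDKISFASTANDAWESOME serial disappears. The traced commands:

  # host-side teardown as traced
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1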
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.470 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1476955 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1476955 ']' 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1476955 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476955 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476955' 00:13:58.471 killing process with pid 1476955 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1476955 00:13:58.471 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1476955 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:59.035 00:13:59.035 real 0m18.684s 00:13:59.035 user 1m12.287s 00:13:59.035 sys 0m2.328s 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.035 ************************************ 00:13:59.035 END TEST nvmf_filesystem_no_in_capsule 00:13:59.035 ************************************ 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.035 ************************************ 00:13:59.035 START TEST nvmf_filesystem_in_capsule 00:13:59.035 ************************************ 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:59.035 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1479323 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1479323 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1479323 ']' 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
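killprocess stops the target with a guarded kill/wait pattern; a hedged condensation of the steps traced above (the real helper also checks the OS via uname, omitted here):

  # condensed from the xtrace; illustrative only
  [ -n "$nvmfpid" ]                                      # pid must be set
  kill -0 "$nvmfpid"                                     # and still alive
  [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]   # never kill a bare sudo
  kill "$nvmfpid" && wait "$nvmfpid"
  nvmfpid=

The in_capsule suite that starts next repeats the whole flow with in_capsule=4096 instead of 0.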
00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.036 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.036 [2024-10-07 09:34:53.765372] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:13:59.036 [2024-10-07 09:34:53.765466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.036 [2024-10-07 09:34:53.840967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.293 [2024-10-07 09:34:53.964304] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.293 [2024-10-07 09:34:53.964371] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.293 [2024-10-07 09:34:53.964387] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.293 [2024-10-07 09:34:53.964409] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.293 [2024-10-07 09:34:53.964422] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.293 [2024-10-07 09:34:53.966344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.293 [2024-10-07 09:34:53.966388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.293 [2024-10-07 09:34:53.966505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.293 [2024-10-07 09:34:53.966508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.293 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.293 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:59.293 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:59.293 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.293 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 [2024-10-07 09:34:54.138848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.551 09:34:54 
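The functional difference from the no_in_capsule suite is visible in the transport RPC above: judging from the in_capsule=4096 value set at filesystem.sh@47, the -c option carries the in-capsule data size, so command data up to 4 KiB is carried inside the capsule rather than in a separate transfer.

  # transport creation as traced (rpc_cmd is the suite's JSON-RPC wrapper)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size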
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 [2024-10-07 09:34:54.332251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:59.551 09:34:54 
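Target-side bring-up for this suite is the same four-RPC sequence traced above; collected in one place, with the values from this run:

  # target bring-up as traced (rpc_cmd is the suite's JSON-RPC wrapper)
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev with 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420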
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.551 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:59.551 { 00:13:59.551 "name": "Malloc1", 00:13:59.551 "aliases": [ 00:13:59.551 "8b8a2972-effe-4fc1-9e3e-31117ee63b2b" 00:13:59.551 ], 00:13:59.551 "product_name": "Malloc disk", 00:13:59.551 "block_size": 512, 00:13:59.551 "num_blocks": 1048576, 00:13:59.551 "uuid": "8b8a2972-effe-4fc1-9e3e-31117ee63b2b", 00:13:59.551 "assigned_rate_limits": { 00:13:59.551 "rw_ios_per_sec": 0, 00:13:59.551 "rw_mbytes_per_sec": 0, 00:13:59.551 "r_mbytes_per_sec": 0, 00:13:59.551 "w_mbytes_per_sec": 0 00:13:59.551 }, 00:13:59.551 "claimed": true, 00:13:59.551 "claim_type": "exclusive_write", 00:13:59.551 "zoned": false, 00:13:59.551 "supported_io_types": { 00:13:59.551 "read": true, 00:13:59.551 "write": true, 00:13:59.551 "unmap": true, 00:13:59.551 "flush": true, 00:13:59.551 "reset": true, 00:13:59.551 "nvme_admin": false, 00:13:59.551 "nvme_io": false, 00:13:59.551 "nvme_io_md": false, 00:13:59.551 "write_zeroes": true, 00:13:59.551 "zcopy": true, 00:13:59.552 "get_zone_info": false, 00:13:59.552 "zone_management": false, 00:13:59.552 "zone_append": false, 00:13:59.552 "compare": false, 00:13:59.552 "compare_and_write": false, 00:13:59.552 "abort": true, 00:13:59.552 "seek_hole": false, 00:13:59.552 "seek_data": false, 00:13:59.552 "copy": true, 00:13:59.552 "nvme_iov_md": false 00:13:59.552 }, 00:13:59.552 "memory_domains": [ 00:13:59.552 { 00:13:59.552 "dma_device_id": "system", 00:13:59.552 "dma_device_type": 1 00:13:59.552 }, 00:13:59.552 { 00:13:59.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.552 "dma_device_type": 2 00:13:59.552 } 00:13:59.552 ], 00:13:59.552 "driver_specific": {} 00:13:59.552 } 00:13:59.552 ]' 00:13:59.552 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:59.810 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.377 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.377 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.377 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.377 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:00.377 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:02.906 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:02.906 09:34:57 
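sec_size_to_bytes resolves the connected device's capacity from sysfs; the trace only shows the existence check and the final 536870912, so the following is a hedged reconstruction that assumes the helper multiplies the 512-byte sector count in /sys/block/<dev>/size:

  # reconstruction, not the helper's actual source
  sec_size_to_bytes() {
      local dev=$1
      [[ -e /sys/block/$dev ]] || return 1
      echo $(( $(< "/sys/block/$dev/size") * 512 ))   # 1048576 * 512 = 536870912 for nvme0n1 here
  }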
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:03.840 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.776 ************************************ 00:14:04.776 START TEST filesystem_in_capsule_ext4 00:14:04.776 ************************************ 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:04.776 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:04.776 mke2fs 1.47.0 (5-Feb-2023) 00:14:04.776 Discarding device blocks: 0/522240 done 00:14:04.776 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:04.776 Filesystem UUID: 6b154f32-5c7e-4237-a464-e4a90b0a337e 00:14:04.776 Superblock backups stored on blocks: 00:14:04.776 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:04.776 00:14:04.776 Allocating group tables: 0/64 done 00:14:04.776 Writing inode tables: 
0/64 done 00:14:05.035 Creating journal (8192 blocks): done 00:14:05.035 Writing superblocks and filesystem accounting information: 0/64 done 00:14:05.035 00:14:05.035 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:05.035 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1479323 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:11.597 00:14:11.597 real 0m6.037s 00:14:11.597 user 0m0.017s 00:14:11.597 sys 0m0.076s 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:11.597 ************************************ 00:14:11.597 END TEST filesystem_in_capsule_ext4 00:14:11.597 ************************************ 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.597 
************************************ 00:14:11.597 START TEST filesystem_in_capsule_btrfs 00:14:11.597 ************************************ 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:11.597 btrfs-progs v6.8.1 00:14:11.597 See https://btrfs.readthedocs.io for more information. 00:14:11.597 00:14:11.597 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:11.597 NOTE: several default settings have changed in version 5.15, please make sure 00:14:11.597 this does not affect your deployments: 00:14:11.597 - DUP for metadata (-m dup) 00:14:11.597 - enabled no-holes (-O no-holes) 00:14:11.597 - enabled free-space-tree (-R free-space-tree) 00:14:11.597 00:14:11.597 Label: (null) 00:14:11.597 UUID: 53988cbf-c2ca-4152-af7e-07886ba4bd10 00:14:11.597 Node size: 16384 00:14:11.597 Sector size: 4096 (CPU page size: 4096) 00:14:11.597 Filesystem size: 510.00MiB 00:14:11.597 Block group profiles: 00:14:11.597 Data: single 8.00MiB 00:14:11.597 Metadata: DUP 32.00MiB 00:14:11.597 System: DUP 8.00MiB 00:14:11.597 SSD detected: yes 00:14:11.597 Zoned device: no 00:14:11.597 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:11.597 Checksum: crc32c 00:14:11.597 Number of devices: 1 00:14:11.597 Devices: 00:14:11.597 ID SIZE PATH 00:14:11.597 1 510.00MiB /dev/nvme0n1p1 00:14:11.597 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:11.597 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1479323 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:11.597 00:14:11.597 real 0m0.701s 00:14:11.597 user 0m0.018s 00:14:11.597 sys 0m0.103s 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:14:11.597 ************************************ 00:14:11.597 END TEST filesystem_in_capsule_btrfs 00:14:11.597 ************************************ 00:14:11.597 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.598 ************************************ 00:14:11.598 START TEST filesystem_in_capsule_xfs 00:14:11.598 ************************************ 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:11.598 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:11.857 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:11.857 = sectsz=512 attr=2, projid32bit=1 00:14:11.857 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:11.857 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:11.857 data = bsize=4096 blocks=130560, imaxpct=25 00:14:11.857 = sunit=0 swidth=0 blks 00:14:11.857 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:11.857 log =internal log bsize=4096 blocks=16384, version=2 00:14:11.857 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:11.857 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:12.424 Discarding blocks...Done. 
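The btrfs pass above, and the xfs pass that continues below, exercise the same create-and-verify sequence from target/filesystem.sh. A condensed sketch of that sequence, reconstructed from the commands visible in this trace (retry logic and error handling in the real script are omitted; $fstype, $dev_name and $nvmfpid mirror the locals shown above, e.g. xfs, /dev/nvme0n1p1 and 1479323 in this run):

    # Build a filesystem on the exported namespace and verify basic I/O over NVMe/TCP.
    mkfs."$fstype" -f "$dev_name"            # make_filesystem; both btrfs and xfs take -f to force
    mount "$dev_name" /mnt/device            # filesystem.sh@23
    touch /mnt/device/aaa                    # @24: create a file on the remote namespace
    sync                                     # @25
    rm /mnt/device/aaa                       # @26: delete it again
    sync                                     # @27
    umount /mnt/device                       # @30
    kill -0 "$nvmfpid"                       # @37: the nvmf target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # @40: connected controller still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # @43: partition still visible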
00:14:12.424 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:12.424 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:14.953 00:14:14.953 real 0m3.175s 00:14:14.953 user 0m0.026s 00:14:14.953 sys 0m0.070s 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:14.953 ************************************ 00:14:14.953 END TEST filesystem_in_capsule_xfs 00:14:14.953 ************************************ 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1479323 ']' 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1479323' 00:14:14.953 killing process with pid 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1479323 00:14:14.953 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1479323 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:15.521 00:14:15.521 real 0m16.481s 00:14:15.521 user 1m3.437s 00:14:15.521 sys 0m2.278s 00:14:15.521 09:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.521 ************************************ 00:14:15.521 END TEST nvmf_filesystem_in_capsule 00:14:15.521 ************************************ 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:15.521 rmmod nvme_tcp 00:14:15.521 rmmod nvme_fabrics 00:14:15.521 rmmod nvme_keyring 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.521 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:18.063 00:14:18.063 real 0m40.659s 00:14:18.063 user 2m16.953s 00:14:18.063 sys 0m6.879s 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.063 
************************************ 00:14:18.063 END TEST nvmf_filesystem 00:14:18.063 ************************************ 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.063 ************************************ 00:14:18.063 START TEST nvmf_target_discovery 00:14:18.063 ************************************ 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:18.063 * Looking for test storage... 00:14:18.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:18.063 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.064 --rc genhtml_branch_coverage=1 00:14:18.064 --rc genhtml_function_coverage=1 00:14:18.064 --rc genhtml_legend=1 00:14:18.064 --rc geninfo_all_blocks=1 00:14:18.064 --rc geninfo_unexecuted_blocks=1 00:14:18.064 00:14:18.064 ' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.064 --rc genhtml_branch_coverage=1 00:14:18.064 --rc genhtml_function_coverage=1 00:14:18.064 --rc genhtml_legend=1 00:14:18.064 --rc geninfo_all_blocks=1 00:14:18.064 --rc geninfo_unexecuted_blocks=1 00:14:18.064 00:14:18.064 ' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.064 --rc genhtml_branch_coverage=1 00:14:18.064 --rc genhtml_function_coverage=1 00:14:18.064 --rc genhtml_legend=1 00:14:18.064 --rc geninfo_all_blocks=1 00:14:18.064 --rc geninfo_unexecuted_blocks=1 00:14:18.064 00:14:18.064 ' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.064 --rc genhtml_branch_coverage=1 00:14:18.064 --rc genhtml_function_coverage=1 00:14:18.064 --rc genhtml_legend=1 00:14:18.064 --rc geninfo_all_blocks=1 00:14:18.064 --rc geninfo_unexecuted_blocks=1 00:14:18.064 00:14:18.064 ' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.064 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:18.065 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.672 09:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:20.672 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:20.673 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:20.673 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:20.673 Found net devices under 0000:84:00.0: cvl_0_0 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
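The loop above has just mapped 0000:84:00.0 to cvl_0_0 and repeats below for 0000:84:00.1. The mapping itself is only a sysfs glob; a minimal standalone sketch of that lookup, with the PCI addresses taken from this run and the empty-glob and link-state checks of nvmf/common.sh left out:

    # Resolve each supported NIC PCI function to the net device names the kernel created for it.
    for pci in 0000:84:00.0 0000:84:00.1; do
        pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)   # one sysfs entry per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done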
00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:20.673 Found net devices under 0000:84:00.1: cvl_0_1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.673 09:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:20.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:14:20.673 00:14:20.673 --- 10.0.0.2 ping statistics --- 00:14:20.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.673 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:20.673 00:14:20.673 --- 10.0.0.1 ping statistics --- 00:14:20.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.673 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1483366 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1483366 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1483366 ']' 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.673 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.673 [2024-10-07 09:35:15.377499] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:14:20.673 [2024-10-07 09:35:15.377602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.673 [2024-10-07 09:35:15.462525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.931 [2024-10-07 09:35:15.586475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.931 [2024-10-07 09:35:15.586552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.931 [2024-10-07 09:35:15.586570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.931 [2024-10-07 09:35:15.586584] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.931 [2024-10-07 09:35:15.586596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
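nvmfappstart has just launched the target inside the cvl_0_0_ns_spdk namespace; the reactor start-up messages and the first RPCs follow below. A simplified sketch of that launch, assuming rpc_cmd in this trace resolves to scripts/rpc.py and using a socket poll as a rough stand-in for waitforlisten:

    # Start the SPDK NVMe-oF target in the test namespace with the flags shown above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the target is listening on its RPC socket (1483366 in this run).
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # First RPC issued by discovery.sh@23 once the target is up, copied from the trace.
    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192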
00:14:20.931 [2024-10-07 09:35:15.588602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.931 [2024-10-07 09:35:15.588700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.931 [2024-10-07 09:35:15.588760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.931 [2024-10-07 09:35:15.588762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.190 [2024-10-07 09:35:15.915523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.190 Null1 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.190 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 09:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 [2024-10-07 09:35:15.955819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 Null2 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:21.191 Null3 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.191 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.448 Null4 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:21.448 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.449 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.449 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:21.706 00:14:21.706 Discovery Log Number of Records 6, Generation counter 6 00:14:21.706 =====Discovery Log Entry 0====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: current discovery subsystem 00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4420 00:14:21.707 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: explicit discovery connections, duplicate discovery information 00:14:21.707 sectype: none 00:14:21.707 =====Discovery Log Entry 1====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: nvme subsystem 00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4420 00:14:21.707 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: none 00:14:21.707 sectype: none 00:14:21.707 =====Discovery Log Entry 2====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: nvme subsystem 00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4420 00:14:21.707 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: none 00:14:21.707 sectype: none 00:14:21.707 =====Discovery Log Entry 3====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: nvme subsystem 00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4420 00:14:21.707 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: none 00:14:21.707 sectype: none 00:14:21.707 =====Discovery Log Entry 4====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: nvme subsystem 
00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4420 00:14:21.707 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: none 00:14:21.707 sectype: none 00:14:21.707 =====Discovery Log Entry 5====== 00:14:21.707 trtype: tcp 00:14:21.707 adrfam: ipv4 00:14:21.707 subtype: discovery subsystem referral 00:14:21.707 treq: not required 00:14:21.707 portid: 0 00:14:21.707 trsvcid: 4430 00:14:21.707 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:21.707 traddr: 10.0.0.2 00:14:21.707 eflags: none 00:14:21.707 sectype: none 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:21.707 Perform nvmf subsystem discovery via RPC 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 [ 00:14:21.707 { 00:14:21.707 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:21.707 "subtype": "Discovery", 00:14:21.707 "listen_addresses": [ 00:14:21.707 { 00:14:21.707 "trtype": "TCP", 00:14:21.707 "adrfam": "IPv4", 00:14:21.707 "traddr": "10.0.0.2", 00:14:21.707 "trsvcid": "4420" 00:14:21.707 } 00:14:21.707 ], 00:14:21.707 "allow_any_host": true, 00:14:21.707 "hosts": [] 00:14:21.707 }, 00:14:21.707 { 00:14:21.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.707 "subtype": "NVMe", 00:14:21.707 "listen_addresses": [ 00:14:21.707 { 00:14:21.707 "trtype": "TCP", 00:14:21.707 "adrfam": "IPv4", 00:14:21.707 "traddr": "10.0.0.2", 00:14:21.707 "trsvcid": "4420" 00:14:21.707 } 00:14:21.707 ], 00:14:21.707 "allow_any_host": true, 00:14:21.707 "hosts": [], 00:14:21.707 "serial_number": "SPDK00000000000001", 00:14:21.707 "model_number": "SPDK bdev Controller", 00:14:21.707 "max_namespaces": 32, 00:14:21.707 "min_cntlid": 1, 00:14:21.707 "max_cntlid": 65519, 00:14:21.707 "namespaces": [ 00:14:21.707 { 00:14:21.707 "nsid": 1, 00:14:21.707 "bdev_name": "Null1", 00:14:21.707 "name": "Null1", 00:14:21.707 "nguid": "81FAF61660C24A96A242DC9A72ECC9C8", 00:14:21.707 "uuid": "81faf616-60c2-4a96-a242-dc9a72ecc9c8" 00:14:21.707 } 00:14:21.707 ] 00:14:21.707 }, 00:14:21.707 { 00:14:21.707 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:21.707 "subtype": "NVMe", 00:14:21.707 "listen_addresses": [ 00:14:21.707 { 00:14:21.707 "trtype": "TCP", 00:14:21.707 "adrfam": "IPv4", 00:14:21.707 "traddr": "10.0.0.2", 00:14:21.707 "trsvcid": "4420" 00:14:21.707 } 00:14:21.707 ], 00:14:21.707 "allow_any_host": true, 00:14:21.707 "hosts": [], 00:14:21.707 "serial_number": "SPDK00000000000002", 00:14:21.707 "model_number": "SPDK bdev Controller", 00:14:21.707 "max_namespaces": 32, 00:14:21.707 "min_cntlid": 1, 00:14:21.707 "max_cntlid": 65519, 00:14:21.707 "namespaces": [ 00:14:21.707 { 00:14:21.707 "nsid": 1, 00:14:21.707 "bdev_name": "Null2", 00:14:21.707 "name": "Null2", 00:14:21.707 "nguid": "8E3C5BB260D24CE0B4130559BF1BEA51", 00:14:21.707 "uuid": "8e3c5bb2-60d2-4ce0-b413-0559bf1bea51" 00:14:21.707 } 00:14:21.707 ] 00:14:21.707 }, 00:14:21.707 { 00:14:21.707 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:21.707 "subtype": "NVMe", 00:14:21.707 "listen_addresses": [ 00:14:21.707 { 00:14:21.707 "trtype": "TCP", 00:14:21.707 "adrfam": "IPv4", 00:14:21.707 "traddr": "10.0.0.2", 
00:14:21.707 "trsvcid": "4420" 00:14:21.707 } 00:14:21.707 ], 00:14:21.707 "allow_any_host": true, 00:14:21.707 "hosts": [], 00:14:21.707 "serial_number": "SPDK00000000000003", 00:14:21.707 "model_number": "SPDK bdev Controller", 00:14:21.707 "max_namespaces": 32, 00:14:21.707 "min_cntlid": 1, 00:14:21.707 "max_cntlid": 65519, 00:14:21.707 "namespaces": [ 00:14:21.707 { 00:14:21.707 "nsid": 1, 00:14:21.707 "bdev_name": "Null3", 00:14:21.707 "name": "Null3", 00:14:21.707 "nguid": "ED1CDD529FCB476A9A3902AD76B662CD", 00:14:21.707 "uuid": "ed1cdd52-9fcb-476a-9a39-02ad76b662cd" 00:14:21.707 } 00:14:21.707 ] 00:14:21.707 }, 00:14:21.707 { 00:14:21.707 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:21.707 "subtype": "NVMe", 00:14:21.707 "listen_addresses": [ 00:14:21.707 { 00:14:21.707 "trtype": "TCP", 00:14:21.707 "adrfam": "IPv4", 00:14:21.707 "traddr": "10.0.0.2", 00:14:21.707 "trsvcid": "4420" 00:14:21.707 } 00:14:21.707 ], 00:14:21.707 "allow_any_host": true, 00:14:21.707 "hosts": [], 00:14:21.707 "serial_number": "SPDK00000000000004", 00:14:21.707 "model_number": "SPDK bdev Controller", 00:14:21.707 "max_namespaces": 32, 00:14:21.707 "min_cntlid": 1, 00:14:21.707 "max_cntlid": 65519, 00:14:21.707 "namespaces": [ 00:14:21.707 { 00:14:21.707 "nsid": 1, 00:14:21.707 "bdev_name": "Null4", 00:14:21.707 "name": "Null4", 00:14:21.707 "nguid": "30933AF763244AD6B4683B5C70492337", 00:14:21.707 "uuid": "30933af7-6324-4ad6-b468-3b5c70492337" 00:14:21.707 } 00:14:21.707 ] 00:14:21.707 } 00:14:21.707 ] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.707 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.708 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.708 rmmod nvme_tcp 00:14:21.708 rmmod nvme_fabrics 00:14:21.708 rmmod nvme_keyring 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1483366 ']' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1483366 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1483366 ']' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1483366 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.708 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483366 00:14:21.965 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:21.965 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:21.965 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483366' 00:14:21.965 killing process with pid 1483366 00:14:21.965 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1483366 00:14:21.965 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1483366 00:14:22.225 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.225 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.132 00:14:24.132 real 0m6.496s 00:14:24.132 user 0m6.110s 00:14:24.132 sys 0m2.369s 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.132 ************************************ 00:14:24.132 END TEST nvmf_target_discovery 00:14:24.132 ************************************ 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.132 ************************************ 00:14:24.132 START TEST nvmf_referrals 00:14:24.132 ************************************ 00:14:24.132 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:24.393 * Looking for test storage... 
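For reference, the teardown that target/discovery.sh traces above reduces to roughly the following sketch (assuming the harness's rpc_cmd wrapper and nvmftestfini helper are sourced, exactly as in this run):

for i in $(seq 1 4); do
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i   # remove cnode1..cnode4
  rpc_cmd bdev_null_delete Null$i                             # and the null bdev backing it
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430     # drop the referral added earlier
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')              # expected to come back empty here
nvmftestfini                                                          # rmmod nvme-tcp/nvme-fabrics, kill nvmf_tgt
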
00:14:24.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:24.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.393 --rc genhtml_branch_coverage=1 00:14:24.393 --rc genhtml_function_coverage=1 00:14:24.393 --rc genhtml_legend=1 00:14:24.393 --rc geninfo_all_blocks=1 00:14:24.393 --rc geninfo_unexecuted_blocks=1 00:14:24.393 00:14:24.393 ' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:24.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.393 --rc genhtml_branch_coverage=1 00:14:24.393 --rc genhtml_function_coverage=1 00:14:24.393 --rc genhtml_legend=1 00:14:24.393 --rc geninfo_all_blocks=1 00:14:24.393 --rc geninfo_unexecuted_blocks=1 00:14:24.393 00:14:24.393 ' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:24.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.393 --rc genhtml_branch_coverage=1 00:14:24.393 --rc genhtml_function_coverage=1 00:14:24.393 --rc genhtml_legend=1 00:14:24.393 --rc geninfo_all_blocks=1 00:14:24.393 --rc geninfo_unexecuted_blocks=1 00:14:24.393 00:14:24.393 ' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:24.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.393 --rc genhtml_branch_coverage=1 00:14:24.393 --rc genhtml_function_coverage=1 00:14:24.393 --rc genhtml_legend=1 00:14:24.393 --rc geninfo_all_blocks=1 00:14:24.393 --rc geninfo_unexecuted_blocks=1 00:14:24.393 00:14:24.393 ' 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:24.393 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.394 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:26.928 09:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:26.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:26.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:26.928 
09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.928 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:26.929 Found net devices under 0000:84:00.0: cvl_0_0 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:26.929 Found net devices under 0000:84:00.1: cvl_0_1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:26.929 09:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:14:26.929 00:14:26.929 --- 10.0.0.2 ping statistics --- 00:14:26.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.929 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:26.929 00:14:26.929 --- 10.0.0.1 ping statistics --- 00:14:26.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.929 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:26.929 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1485602 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1485602 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1485602 ']' 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
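Condensed from the nvmf_tcp_init/nvmfappstart trace above, the network and target bring-up on this host is essentially the sequence below (cvl_0_0/cvl_0_1 are the two e810 ports detected earlier; paths as in the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # reachability both ways
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target on cores 0-3, all tracepoint groups
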
00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.188 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.188 [2024-10-07 09:35:21.808201] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:14:27.188 [2024-10-07 09:35:21.808310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.188 [2024-10-07 09:35:21.884600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.446 [2024-10-07 09:35:22.005788] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.446 [2024-10-07 09:35:22.005864] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.446 [2024-10-07 09:35:22.005903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.446 [2024-10-07 09:35:22.005919] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.446 [2024-10-07 09:35:22.005929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.446 [2024-10-07 09:35:22.008009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.446 [2024-10-07 09:35:22.008037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.446 [2024-10-07 09:35:22.008093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.446 [2024-10-07 09:35:22.008096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.446 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.446 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:27.446 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:27.446 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.446 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 [2024-10-07 09:35:22.266074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
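Stripped of the xtrace noise, the referral setup and verification that referrals.sh runs from here is roughly the sketch below (rpc_cmd and the NVME_HOST array come from the sourced common.sh shown earlier):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                         # NVMF_TRANSPORT_OPTS plus 8192-byte in-capsule data
rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery  # discovery service on the standard port 8009
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc_cmd nvmf_discovery_add_referral -t tcp -a $ip -s 4430             # NVMF_PORT_REFERRAL=4430
done
rpc_cmd nvmf_discovery_get_referrals | jq length                        # expect 3
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort   # expect the three referral IPs
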
00:14:27.704 [2024-10-07 09:35:22.278316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.704 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:27.961 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:27.962 09:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.962 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:28.220 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.221 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:28.221 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.478 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.737 09:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:28.737 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:29.066 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:29.324 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:29.582 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:29.839 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
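The referral test traced above adds three referrals (127.0.0.2/3/4 on port 4430) over RPC and then cross-checks them against the discovery log an initiator actually sees, using the same jq filters on both sides. A minimal standalone sketch of that round-trip is shown below; the rpc.py path is a placeholder, the discovery listener address comes from the trace (10.0.0.2:8009), and the --hostnqn/--hostid flags used in the real script are omitted here (nvme-cli falls back to /etc/nvme/hostnqn).

  #!/usr/bin/env bash
  # Sketch of the referral round-trip checked above (paths/addresses assumed).
  set -euo pipefail
  rpc=/path/to/spdk/scripts/rpc.py        # assumption: location of SPDK's rpc.py
  disc_ip=10.0.0.2 disc_port=8009         # discovery listener seen in the trace

  # Add three referrals, as target/referrals.sh does.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # View 1: what the target reports over RPC.
  rpc_ips=$("$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

  # View 2: what an initiator sees in the discovery log (referral entries only).
  nvme_ips=$(nvme discover -t tcp -a "$disc_ip" -s "$disc_port" -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

  if [[ "$rpc_ips" == "$nvme_ips" ]]; then
      echo "referrals match:" $rpc_ips
  else
      echo "referral mismatch: rpc=[$rpc_ips] nvme=[$nvme_ips]" >&2
      exit 1
  fi

The same pattern repeats later in the trace with -n to attach a referral to a specific subsystem NQN or to the well-known discovery NQN, with jq pulling .subnqn instead of .traddr.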
00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:29.840 rmmod nvme_tcp 00:14:29.840 rmmod nvme_fabrics 00:14:29.840 rmmod nvme_keyring 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1485602 ']' 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1485602 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1485602 ']' 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1485602 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1485602 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1485602' 00:14:29.840 killing process with pid 1485602 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1485602 00:14:29.840 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1485602 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.098 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.098 09:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.631 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:32.631 00:14:32.631 real 0m7.966s 00:14:32.631 user 0m13.163s 00:14:32.631 sys 0m2.760s 00:14:32.631 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.631 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:32.631 ************************************ 00:14:32.631 END TEST nvmf_referrals 00:14:32.631 ************************************ 00:14:32.631 09:35:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:32.632 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.632 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.632 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.632 ************************************ 00:14:32.632 START TEST nvmf_connect_disconnect 00:14:32.632 ************************************ 00:14:32.632 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:32.632 * Looking for test storage... 00:14:32.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.632 09:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.632 --rc genhtml_branch_coverage=1 00:14:32.632 --rc genhtml_function_coverage=1 00:14:32.632 --rc genhtml_legend=1 00:14:32.632 --rc geninfo_all_blocks=1 00:14:32.632 --rc geninfo_unexecuted_blocks=1 00:14:32.632 00:14:32.632 ' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.632 --rc genhtml_branch_coverage=1 00:14:32.632 --rc genhtml_function_coverage=1 00:14:32.632 --rc genhtml_legend=1 00:14:32.632 --rc geninfo_all_blocks=1 00:14:32.632 --rc geninfo_unexecuted_blocks=1 00:14:32.632 00:14:32.632 ' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.632 --rc genhtml_branch_coverage=1 00:14:32.632 --rc genhtml_function_coverage=1 00:14:32.632 --rc genhtml_legend=1 00:14:32.632 --rc geninfo_all_blocks=1 00:14:32.632 --rc geninfo_unexecuted_blocks=1 00:14:32.632 00:14:32.632 ' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.632 --rc genhtml_branch_coverage=1 00:14:32.632 --rc genhtml_function_coverage=1 00:14:32.632 --rc genhtml_legend=1 00:14:32.632 --rc geninfo_all_blocks=1 00:14:32.632 --rc geninfo_unexecuted_blocks=1 00:14:32.632 00:14:32.632 ' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:32.632 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.633 09:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:32.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:32.633 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:35.162 
09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:35.162 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.162 
09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:35.162 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:35.162 Found net devices under 0000:84:00.0: cvl_0_0 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
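The common.sh block running here (and continuing below) scans the PCI bus for supported NICs, matches the E810 device ID (0x8086:0x159b), and then maps each matching function to its kernel net device via /sys/bus/pci/devices/$pci/net/*, which is how cvl_0_0 and cvl_0_1 are found. A rough standalone equivalent of that sysfs walk, simplified from the script's cached-array approach, might look like:

  #!/usr/bin/env bash
  # List net devices behind Intel E810 (8086:159b) PCI functions, as the trace does.
  shopt -s nullglob
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done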
00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:35.162 Found net devices under 0000:84:00.1: cvl_0_1 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.162 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:14:35.163 00:14:35.163 --- 10.0.0.2 ping statistics --- 00:14:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.163 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:14:35.163 00:14:35.163 --- 10.0.0.1 ping statistics --- 00:14:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.163 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1488047 00:14:35.163 09:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1488047 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1488047 ']' 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.163 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.163 [2024-10-07 09:35:29.876781] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:14:35.163 [2024-10-07 09:35:29.876864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.163 [2024-10-07 09:35:29.950683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.420 [2024-10-07 09:35:30.074805] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.420 [2024-10-07 09:35:30.074873] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.420 [2024-10-07 09:35:30.074897] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.420 [2024-10-07 09:35:30.074913] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.420 [2024-10-07 09:35:30.074926] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
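The nvmf_tcp_init sequence just traced moves one E810 port into a network namespace so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over the physical link rather than loopback, which matches NET_TYPE=phy in the job config. A condensed sketch of that plumbing follows; interface names, addresses, and app flags are taken from the trace, while the nvmf_tgt path is an assumption.

  #!/usr/bin/env bash
  # Condensed nvmf_tcp_init: target NIC in a netns, initiator NIC in the root namespace.
  set -euo pipefail
  ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1       # names as seen in the log
  tgt_bin=/path/to/spdk/build/bin/nvmf_tgt               # assumption: SPDK build location

  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"
  ip addr add 10.0.0.1/24 dev "$ini_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface (ipts() also tags the rule for cleanup).
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator

  # Start the target inside the namespace (cores 0-3, all trace groups), as the trace shows.
  ip netns exec "$ns" "$tgt_bin" -i 0 -e 0xFFFF -m 0xF &

The real harness then waits for the RPC socket (waitforlisten) before issuing any configuration RPCs, which is what the "Waiting for process to start up and listen on UNIX domain socket" message above corresponds to.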
00:14:35.420 [2024-10-07 09:35:30.076749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.420 [2024-10-07 09:35:30.076815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.420 [2024-10-07 09:35:30.076916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.420 [2024-10-07 09:35:30.076920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.420 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.420 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:35.420 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:35.420 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.420 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 [2024-10-07 09:35:30.252635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 09:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:35.678 [2024-10-07 09:35:30.314756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:35.678 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:38.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:49.816 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:49.817 rmmod nvme_tcp 00:14:49.817 rmmod nvme_fabrics 00:14:49.817 rmmod nvme_keyring 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1488047 ']' 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1488047 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1488047 ']' 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1488047 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1488047 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1488047' 00:14:49.817 killing process with pid 1488047 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1488047 00:14:49.817 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1488047 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.075 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:52.610 00:14:52.610 real 0m19.972s 00:14:52.610 user 0m58.863s 00:14:52.610 sys 0m3.890s 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:52.610 ************************************ 00:14:52.610 END TEST nvmf_connect_disconnect 00:14:52.610 ************************************ 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.610 09:35:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.610 ************************************ 00:14:52.610 START TEST nvmf_multitarget 00:14:52.610 ************************************ 00:14:52.610 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.610 * Looking for test storage... 00:14:52.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:52.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.610 --rc genhtml_branch_coverage=1 00:14:52.610 --rc genhtml_function_coverage=1 00:14:52.610 --rc genhtml_legend=1 00:14:52.610 --rc geninfo_all_blocks=1 00:14:52.610 --rc geninfo_unexecuted_blocks=1 00:14:52.610 00:14:52.610 ' 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:52.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.610 --rc genhtml_branch_coverage=1 00:14:52.610 --rc genhtml_function_coverage=1 00:14:52.610 --rc genhtml_legend=1 00:14:52.610 --rc geninfo_all_blocks=1 00:14:52.610 --rc geninfo_unexecuted_blocks=1 00:14:52.610 00:14:52.610 ' 00:14:52.610 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:52.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.610 --rc genhtml_branch_coverage=1 00:14:52.610 --rc genhtml_function_coverage=1 00:14:52.610 --rc genhtml_legend=1 00:14:52.611 --rc geninfo_all_blocks=1 00:14:52.611 --rc geninfo_unexecuted_blocks=1 00:14:52.611 00:14:52.611 ' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.611 --rc genhtml_branch_coverage=1 00:14:52.611 --rc genhtml_function_coverage=1 00:14:52.611 --rc genhtml_legend=1 00:14:52.611 --rc geninfo_all_blocks=1 00:14:52.611 --rc geninfo_unexecuted_blocks=1 00:14:52.611 00:14:52.611 ' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.611 09:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:52.611 09:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:52.611 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.142 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.142 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:55.142 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:55.142 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:55.143 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:55.143 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:55.143 Found net devices under 0000:84:00.0: cvl_0_0 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:55.143 Found net devices under 0000:84:00.1: cvl_0_1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:55.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:14:55.143 00:14:55.143 --- 10.0.0.2 ping statistics --- 00:14:55.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.143 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:14:55.143 00:14:55.143 --- 10.0.0.1 ping statistics --- 00:14:55.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.143 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.143 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1491834 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1491834 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1491834 ']' 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.144 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.144 [2024-10-07 09:35:49.846650] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:14:55.144 [2024-10-07 09:35:49.846805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.144 [2024-10-07 09:35:49.940637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.402 [2024-10-07 09:35:50.066176] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.402 [2024-10-07 09:35:50.066240] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.402 [2024-10-07 09:35:50.066257] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.402 [2024-10-07 09:35:50.066271] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.402 [2024-10-07 09:35:50.066283] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.402 [2024-10-07 09:35:50.068127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.402 [2024-10-07 09:35:50.068196] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.402 [2024-10-07 09:35:50.068287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.402 [2024-10-07 09:35:50.068290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.402 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.402 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:55.402 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:55.402 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.402 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:55.661 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:55.919 "nvmf_tgt_1" 00:14:55.919 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:55.919 "nvmf_tgt_2" 00:14:55.919 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:14:55.919 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:56.176 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:56.176 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:56.434 true 00:14:56.434 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:56.434 true 00:14:56.434 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:56.434 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:56.692 rmmod nvme_tcp 00:14:56.692 rmmod nvme_fabrics 00:14:56.692 rmmod nvme_keyring 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1491834 ']' 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1491834 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1491834 ']' 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1491834 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.692 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491834 00:14:56.950 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.950 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.950 09:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491834' 00:14:56.950 killing process with pid 1491834 00:14:56.950 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1491834 00:14:56.950 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1491834 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.208 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:59.110 00:14:59.110 real 0m6.868s 00:14:59.110 user 0m8.820s 00:14:59.110 sys 0m2.502s 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.110 ************************************ 00:14:59.110 END TEST nvmf_multitarget 00:14:59.110 ************************************ 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:59.110 ************************************ 00:14:59.110 START TEST nvmf_rpc 00:14:59.110 ************************************ 00:14:59.110 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:59.370 * Looking for test storage... 
00:14:59.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.370 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:59.370 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:14:59.370 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.370 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.371 --rc genhtml_branch_coverage=1 00:14:59.371 --rc genhtml_function_coverage=1 00:14:59.371 --rc genhtml_legend=1 00:14:59.371 --rc geninfo_all_blocks=1 00:14:59.371 --rc geninfo_unexecuted_blocks=1 00:14:59.371 00:14:59.371 ' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.371 --rc genhtml_branch_coverage=1 00:14:59.371 --rc genhtml_function_coverage=1 00:14:59.371 --rc genhtml_legend=1 00:14:59.371 --rc geninfo_all_blocks=1 00:14:59.371 --rc geninfo_unexecuted_blocks=1 00:14:59.371 00:14:59.371 ' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.371 --rc genhtml_branch_coverage=1 00:14:59.371 --rc genhtml_function_coverage=1 00:14:59.371 --rc genhtml_legend=1 00:14:59.371 --rc geninfo_all_blocks=1 00:14:59.371 --rc geninfo_unexecuted_blocks=1 00:14:59.371 00:14:59.371 ' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.371 --rc genhtml_branch_coverage=1 00:14:59.371 --rc genhtml_function_coverage=1 00:14:59.371 --rc genhtml_legend=1 00:14:59.371 --rc geninfo_all_blocks=1 00:14:59.371 --rc geninfo_unexecuted_blocks=1 00:14:59.371 00:14:59.371 ' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:59.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:59.371 09:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:59.371 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:01.905 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:01.905 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:01.905 Found net devices under 0000:84:00.0: cvl_0_0 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:01.905 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:01.906 Found net devices under 0000:84:00.1: cvl_0_1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:01.906 09:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.906 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:02.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:15:02.165 00:15:02.165 --- 10.0.0.2 ping statistics --- 00:15:02.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.165 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:02.165 00:15:02.165 --- 10.0.0.1 ping statistics --- 00:15:02.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.165 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1494082 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1494082 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1494082 ']' 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.165 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.165 [2024-10-07 09:35:56.814625] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:15:02.165 [2024-10-07 09:35:56.814697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.165 [2024-10-07 09:35:56.879215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.424 [2024-10-07 09:35:56.987001] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.424 [2024-10-07 09:35:56.987050] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.424 [2024-10-07 09:35:56.987073] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.424 [2024-10-07 09:35:56.987084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.424 [2024-10-07 09:35:56.987094] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.424 [2024-10-07 09:35:56.988816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.424 [2024-10-07 09:35:56.988911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.424 [2024-10-07 09:35:56.988840] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.424 [2024-10-07 09:35:56.988929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:02.424 "tick_rate": 2700000000, 00:15:02.424 "poll_groups": [ 00:15:02.424 { 00:15:02.424 "name": "nvmf_tgt_poll_group_000", 00:15:02.424 "admin_qpairs": 0, 00:15:02.424 "io_qpairs": 0, 00:15:02.424 "current_admin_qpairs": 0, 00:15:02.424 "current_io_qpairs": 0, 00:15:02.424 "pending_bdev_io": 0, 00:15:02.424 "completed_nvme_io": 0, 00:15:02.424 "transports": [] 00:15:02.424 }, 00:15:02.424 { 00:15:02.424 "name": "nvmf_tgt_poll_group_001", 00:15:02.424 "admin_qpairs": 0, 00:15:02.424 "io_qpairs": 0, 00:15:02.424 "current_admin_qpairs": 0, 00:15:02.424 "current_io_qpairs": 0, 00:15:02.424 "pending_bdev_io": 0, 00:15:02.424 "completed_nvme_io": 0, 00:15:02.424 "transports": [] 00:15:02.424 }, 00:15:02.424 { 00:15:02.424 "name": "nvmf_tgt_poll_group_002", 00:15:02.424 "admin_qpairs": 0, 00:15:02.424 "io_qpairs": 0, 00:15:02.424 
"current_admin_qpairs": 0, 00:15:02.424 "current_io_qpairs": 0, 00:15:02.424 "pending_bdev_io": 0, 00:15:02.424 "completed_nvme_io": 0, 00:15:02.424 "transports": [] 00:15:02.424 }, 00:15:02.424 { 00:15:02.424 "name": "nvmf_tgt_poll_group_003", 00:15:02.424 "admin_qpairs": 0, 00:15:02.424 "io_qpairs": 0, 00:15:02.424 "current_admin_qpairs": 0, 00:15:02.424 "current_io_qpairs": 0, 00:15:02.424 "pending_bdev_io": 0, 00:15:02.424 "completed_nvme_io": 0, 00:15:02.424 "transports": [] 00:15:02.424 } 00:15:02.424 ] 00:15:02.424 }' 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:02.424 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.683 [2024-10-07 09:35:57.292463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:02.683 "tick_rate": 2700000000, 00:15:02.683 "poll_groups": [ 00:15:02.683 { 00:15:02.683 "name": "nvmf_tgt_poll_group_000", 00:15:02.683 "admin_qpairs": 0, 00:15:02.683 "io_qpairs": 0, 00:15:02.683 "current_admin_qpairs": 0, 00:15:02.683 "current_io_qpairs": 0, 00:15:02.683 "pending_bdev_io": 0, 00:15:02.683 "completed_nvme_io": 0, 00:15:02.683 "transports": [ 00:15:02.683 { 00:15:02.683 "trtype": "TCP" 00:15:02.683 } 00:15:02.683 ] 00:15:02.683 }, 00:15:02.683 { 00:15:02.683 "name": "nvmf_tgt_poll_group_001", 00:15:02.683 "admin_qpairs": 0, 00:15:02.683 "io_qpairs": 0, 00:15:02.683 "current_admin_qpairs": 0, 00:15:02.683 "current_io_qpairs": 0, 00:15:02.683 "pending_bdev_io": 0, 00:15:02.683 "completed_nvme_io": 0, 00:15:02.683 "transports": [ 00:15:02.683 { 00:15:02.683 "trtype": "TCP" 00:15:02.683 } 00:15:02.683 ] 00:15:02.683 }, 00:15:02.683 { 00:15:02.683 "name": "nvmf_tgt_poll_group_002", 00:15:02.683 "admin_qpairs": 0, 00:15:02.683 "io_qpairs": 0, 00:15:02.683 "current_admin_qpairs": 0, 00:15:02.683 "current_io_qpairs": 0, 00:15:02.683 "pending_bdev_io": 0, 00:15:02.683 "completed_nvme_io": 0, 00:15:02.683 "transports": [ 00:15:02.683 { 00:15:02.683 "trtype": "TCP" 
00:15:02.683 } 00:15:02.683 ] 00:15:02.683 }, 00:15:02.683 { 00:15:02.683 "name": "nvmf_tgt_poll_group_003", 00:15:02.683 "admin_qpairs": 0, 00:15:02.683 "io_qpairs": 0, 00:15:02.683 "current_admin_qpairs": 0, 00:15:02.683 "current_io_qpairs": 0, 00:15:02.683 "pending_bdev_io": 0, 00:15:02.683 "completed_nvme_io": 0, 00:15:02.683 "transports": [ 00:15:02.683 { 00:15:02.683 "trtype": "TCP" 00:15:02.683 } 00:15:02.683 ] 00:15:02.683 } 00:15:02.683 ] 00:15:02.683 }' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:02.683 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.684 Malloc1 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.684 [2024-10-07 09:35:57.458337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:02.684 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:15:02.684 [2024-10-07 09:35:57.491131] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:15:02.942 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:02.942 could not add new controller: failed to write to nvme-fabrics device 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:02.942 09:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.942 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.507 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.507 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.507 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.507 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.507 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.403 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:05.661 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.662 [2024-10-07 09:36:00.301473] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:15:05.662 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:05.662 could not add new controller: failed to write to nvme-fabrics device 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.662 
09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.662 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.227 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.227 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.227 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.227 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.227 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.755 
09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.755 [2024-10-07 09:36:03.189561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.755 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.756 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.013 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.013 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.014 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.014 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:09.014 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.539 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.540 [2024-10-07 09:36:05.931524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.540 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.104 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.104 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.104 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.104 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:12.104 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 [2024-10-07 09:36:08.771998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.935 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.935 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:14.935 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.935 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:14.935 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:16.833 
09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.833 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 [2024-10-07 09:36:11.553630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.834 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.399 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.399 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.399 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.399 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.399 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 [2024-10-07 09:36:14.332250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.927 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:20.493 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.493 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:20.493 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.493 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:20.493 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:22.428 
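Condensed, the iterations traced above all run the same create/connect/verify/tear-down cycle from target/rpc.sh. The sketch below is a hedged reconstruction of that cycle using only commands visible in the trace; the $rpc path, the omission of the --hostnqn/--hostid arguments, and the assumption that the Malloc1 bdev already exists are simplifications, not the harness's exact helpers.

#!/usr/bin/env bash
# Hedged sketch of the loop the xtrace above repeats: build a subsystem, attach
# from the initiator, wait for the namespace, then undo everything.
set -e

rpc="scripts/rpc.py"                     # assumed location of the SPDK RPC client
nqn="nqn.2016-06.io.spdk:cnode1"
serial="SPDKISFASTANDAWESOME"

for i in $(seq 1 5); do
  # Target side: subsystem with a serial, a TCP listener, one namespace, open host access.
  "$rpc" nvmf_create_subsystem "$nqn" -s "$serial"
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
  "$rpc" nvmf_subsystem_allow_any_host "$nqn"

  # Initiator side: connect, then poll lsblk until the serial shows up (bounded wait).
  nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
  for _ in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && break
    sleep 2
  done

  # Detach and remove the subsystem so the next iteration starts from a clean state.
  nvme disconnect -n "$nqn"
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
  "$rpc" nvmf_delete_subsystem "$nqn"
done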
09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 [2024-10-07 09:36:17.227654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.428 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 [2024-10-07 09:36:17.275725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 
09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 [2024-10-07 09:36:17.323899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 [2024-10-07 09:36:17.372079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 [2024-10-07 09:36:17.420238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.687 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:22.688 "tick_rate": 2700000000, 00:15:22.688 "poll_groups": [ 00:15:22.688 { 00:15:22.688 "name": "nvmf_tgt_poll_group_000", 00:15:22.688 "admin_qpairs": 2, 00:15:22.688 "io_qpairs": 84, 00:15:22.688 "current_admin_qpairs": 0, 00:15:22.688 "current_io_qpairs": 0, 00:15:22.688 "pending_bdev_io": 0, 00:15:22.688 "completed_nvme_io": 172, 00:15:22.688 "transports": [ 00:15:22.688 { 00:15:22.688 "trtype": "TCP" 00:15:22.688 } 00:15:22.688 ] 00:15:22.688 }, 00:15:22.688 { 00:15:22.688 "name": "nvmf_tgt_poll_group_001", 00:15:22.688 "admin_qpairs": 2, 00:15:22.688 "io_qpairs": 84, 00:15:22.688 "current_admin_qpairs": 0, 00:15:22.688 "current_io_qpairs": 0, 00:15:22.688 "pending_bdev_io": 0, 00:15:22.688 "completed_nvme_io": 185, 00:15:22.688 "transports": [ 00:15:22.688 { 00:15:22.688 "trtype": "TCP" 00:15:22.688 } 00:15:22.688 ] 00:15:22.688 }, 00:15:22.688 { 00:15:22.688 "name": "nvmf_tgt_poll_group_002", 00:15:22.688 "admin_qpairs": 1, 00:15:22.688 "io_qpairs": 84, 00:15:22.688 "current_admin_qpairs": 0, 00:15:22.688 "current_io_qpairs": 0, 00:15:22.688 "pending_bdev_io": 0, 00:15:22.688 "completed_nvme_io": 205, 00:15:22.688 "transports": [ 00:15:22.688 { 00:15:22.688 "trtype": "TCP" 00:15:22.688 } 00:15:22.688 ] 00:15:22.688 }, 00:15:22.688 { 00:15:22.688 "name": "nvmf_tgt_poll_group_003", 00:15:22.688 "admin_qpairs": 2, 00:15:22.688 "io_qpairs": 84, 00:15:22.688 "current_admin_qpairs": 0, 00:15:22.688 "current_io_qpairs": 0, 00:15:22.688 "pending_bdev_io": 0, 00:15:22.688 "completed_nvme_io": 124, 00:15:22.688 "transports": [ 00:15:22.688 { 00:15:22.688 "trtype": "TCP" 00:15:22.688 } 00:15:22.688 ] 00:15:22.688 } 00:15:22.688 ] 00:15:22.688 }' 00:15:22.688 09:36:17 
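The jsum calls just below total one numeric field across all four poll groups of the nvmf_get_stats JSON printed above (admin_qpairs 2+2+1+2 = 7, io_qpairs 4×84 = 336). A minimal sketch of that aggregation, assuming the rpc.py path and the shell-variable capture; the jq filters match the fields shown in the stats block:

rpc="scripts/rpc.py"                                  # assumed RPC client path
stats="$("$rpc" nvmf_get_stats)"                      # JSON like the block above

jsum() {
  # Project one field from every poll group and sum it, e.g. '.poll_groups[].io_qpairs'.
  jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'
}

echo "admin qpairs: $(jsum '.poll_groups[].admin_qpairs')"   # 7 in the run above
echo "io qpairs:    $(jsum '.poll_groups[].io_qpairs')"      # 336 in the run above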
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:22.688 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:22.946 rmmod nvme_tcp 00:15:22.946 rmmod nvme_fabrics 00:15:22.946 rmmod nvme_keyring 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1494082 ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1494082 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1494082 ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1494082 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494082 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1494082' 00:15:22.946 killing process with pid 1494082 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1494082 00:15:22.946 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1494082 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.207 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:25.748 00:15:25.748 real 0m26.150s 00:15:25.748 user 1m23.519s 00:15:25.748 sys 0m4.641s 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 ************************************ 00:15:25.748 END TEST nvmf_rpc 00:15:25.748 ************************************ 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 ************************************ 00:15:25.748 START TEST nvmf_invalid 00:15:25.748 ************************************ 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:25.748 * Looking for test storage... 
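The tail of the nvmf_rpc run above is the shared nvmftestfini teardown. Before the nvmf_invalid run that starts next, here is a hedged condensation of those steps; the PID and the cvl_0_* names are specific to this log, and the ip netns delete line is an assumption about what _remove_spdk_ns amounts to.

nvmfpid=1494082                        # nvmf_tgt PID reported in the trace above

modprobe -v -r nvme-tcp                # unload initiator-side kernel modules
modprobe -v -r nvme-fabrics

kill "$nvmfpid"
wait "$nvmfpid" 2>/dev/null || true    # wait only succeeds when nvmf_tgt is a child of this shell

# Strip only the SPDK_NVMF-tagged iptables rules added during the test.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1               # clear the initiator-side address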
00:15:25.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:25.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.748 --rc genhtml_branch_coverage=1 00:15:25.748 --rc genhtml_function_coverage=1 00:15:25.748 --rc genhtml_legend=1 00:15:25.748 --rc geninfo_all_blocks=1 00:15:25.748 --rc geninfo_unexecuted_blocks=1 00:15:25.748 00:15:25.748 ' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:25.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.748 --rc genhtml_branch_coverage=1 00:15:25.748 --rc genhtml_function_coverage=1 00:15:25.748 --rc genhtml_legend=1 00:15:25.748 --rc geninfo_all_blocks=1 00:15:25.748 --rc geninfo_unexecuted_blocks=1 00:15:25.748 00:15:25.748 ' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:25.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.748 --rc genhtml_branch_coverage=1 00:15:25.748 --rc genhtml_function_coverage=1 00:15:25.748 --rc genhtml_legend=1 00:15:25.748 --rc geninfo_all_blocks=1 00:15:25.748 --rc geninfo_unexecuted_blocks=1 00:15:25.748 00:15:25.748 ' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:25.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.748 --rc genhtml_branch_coverage=1 00:15:25.748 --rc genhtml_function_coverage=1 00:15:25.748 --rc genhtml_legend=1 00:15:25.748 --rc geninfo_all_blocks=1 00:15:25.748 --rc geninfo_unexecuted_blocks=1 00:15:25.748 00:15:25.748 ' 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:25.748 09:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.748 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:25.749 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:28.285 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:28.285 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:28.285 Found net devices under 0000:84:00.0: cvl_0_0 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:28.285 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:28.286 Found net devices under 0000:84:00.1: cvl_0_1 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.286 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:28.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:15:28.286 00:15:28.286 --- 10.0.0.2 ping statistics --- 00:15:28.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.286 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:15:28.286 00:15:28.286 --- 10.0.0.1 ping statistics --- 00:15:28.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.286 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:28.286 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1499205 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1499205 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1499205 ']' 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.544 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 [2024-10-07 09:36:23.177558] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:15:28.544 [2024-10-07 09:36:23.177637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.544 [2024-10-07 09:36:23.245391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.544 [2024-10-07 09:36:23.355825] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.544 [2024-10-07 09:36:23.355884] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.544 [2024-10-07 09:36:23.355921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.544 [2024-10-07 09:36:23.355934] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.544 [2024-10-07 09:36:23.355944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.544 [2024-10-07 09:36:23.357848] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.544 [2024-10-07 09:36:23.357869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.544 [2024-10-07 09:36:23.357898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.544 [2024-10-07 09:36:23.357902] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:28.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26290 00:15:29.062 [2024-10-07 09:36:23.824480] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:29.062 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:29.062 { 00:15:29.062 "nqn": "nqn.2016-06.io.spdk:cnode26290", 00:15:29.062 "tgt_name": "foobar", 00:15:29.062 "method": "nvmf_create_subsystem", 00:15:29.062 "req_id": 1 00:15:29.062 } 00:15:29.062 Got JSON-RPC error response 00:15:29.062 response: 00:15:29.062 { 00:15:29.062 "code": -32603, 00:15:29.062 "message": "Unable to find target foobar" 00:15:29.062 }' 00:15:29.062 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:29.062 { 00:15:29.062 "nqn": "nqn.2016-06.io.spdk:cnode26290", 00:15:29.062 "tgt_name": "foobar", 00:15:29.062 "method": "nvmf_create_subsystem", 00:15:29.062 "req_id": 1 00:15:29.062 } 00:15:29.062 Got JSON-RPC error response 00:15:29.062 
response: 00:15:29.062 { 00:15:29.062 "code": -32603, 00:15:29.062 "message": "Unable to find target foobar" 00:15:29.062 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:29.062 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:29.062 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23400 00:15:29.624 [2024-10-07 09:36:24.201785] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23400: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:29.624 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:29.624 { 00:15:29.624 "nqn": "nqn.2016-06.io.spdk:cnode23400", 00:15:29.624 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:29.624 "method": "nvmf_create_subsystem", 00:15:29.624 "req_id": 1 00:15:29.624 } 00:15:29.624 Got JSON-RPC error response 00:15:29.624 response: 00:15:29.624 { 00:15:29.624 "code": -32602, 00:15:29.624 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:29.624 }' 00:15:29.624 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:29.624 { 00:15:29.624 "nqn": "nqn.2016-06.io.spdk:cnode23400", 00:15:29.624 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:29.624 "method": "nvmf_create_subsystem", 00:15:29.624 "req_id": 1 00:15:29.624 } 00:15:29.624 Got JSON-RPC error response 00:15:29.624 response: 00:15:29.624 { 00:15:29.624 "code": -32602, 00:15:29.624 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:29.624 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:29.624 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:29.624 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2695 00:15:29.883 [2024-10-07 09:36:24.575013] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2695: invalid model number 'SPDK_Controller' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:29.883 { 00:15:29.883 "nqn": "nqn.2016-06.io.spdk:cnode2695", 00:15:29.883 "model_number": "SPDK_Controller\u001f", 00:15:29.883 "method": "nvmf_create_subsystem", 00:15:29.883 "req_id": 1 00:15:29.883 } 00:15:29.883 Got JSON-RPC error response 00:15:29.883 response: 00:15:29.883 { 00:15:29.883 "code": -32602, 00:15:29.883 "message": "Invalid MN SPDK_Controller\u001f" 00:15:29.883 }' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:29.883 { 00:15:29.883 "nqn": "nqn.2016-06.io.spdk:cnode2695", 00:15:29.883 "model_number": "SPDK_Controller\u001f", 00:15:29.883 "method": "nvmf_create_subsystem", 00:15:29.883 "req_id": 1 00:15:29.883 } 00:15:29.883 Got JSON-RPC error response 00:15:29.883 response: 00:15:29.883 { 00:15:29.883 "code": -32602, 00:15:29.883 "message": "Invalid MN SPDK_Controller\u001f" 00:15:29.883 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:29.883 09:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:29.883 
09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 
00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.883 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';Z\LU*&{YY"Pi]Ot"RXx' 00:15:29.884 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';Z\LU*&{YY"Pi]Ot"RXx' nqn.2016-06.io.spdk:cnode19015 00:15:30.467 [2024-10-07 09:36:25.052519] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19015: invalid serial number ';Z\LU*&{YY"Pi]Ot"RXx' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:30.467 { 00:15:30.467 "nqn": "nqn.2016-06.io.spdk:cnode19015", 00:15:30.467 "serial_number": ";Z\\LU*&{YY\"Pi]Ot\"RXx\u007f", 00:15:30.467 "method": "nvmf_create_subsystem", 00:15:30.467 "req_id": 1 00:15:30.467 } 00:15:30.467 Got JSON-RPC error response 00:15:30.467 response: 00:15:30.467 { 00:15:30.467 "code": -32602, 00:15:30.467 "message": "Invalid SN ;Z\\LU*&{YY\"Pi]Ot\"RXx\u007f" 00:15:30.467 }' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:30.467 { 00:15:30.467 "nqn": "nqn.2016-06.io.spdk:cnode19015", 00:15:30.467 "serial_number": ";Z\\LU*&{YY\"Pi]Ot\"RXx\u007f", 00:15:30.467 "method": "nvmf_create_subsystem", 00:15:30.467 "req_id": 1 00:15:30.467 } 00:15:30.467 Got JSON-RPC error response 00:15:30.467 response: 00:15:30.467 { 00:15:30.467 "code": -32602, 00:15:30.467 "message": "Invalid SN ;Z\\LU*&{YY\"Pi]Ot\"RXx\u007f" 00:15:30.467 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 112 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:30.467 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:30.468 09:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:30.468 
09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:30.468 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:30.469 
09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9W102tG;gPCp,P)b>V%7`c"n>v43qW1O):6>#Wn' 00:15:30.469 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9W102tG;gPCp,P)b>V%7`c"n>v43qW1O):6>#Wn' nqn.2016-06.io.spdk:cnode25087 00:15:31.072 [2024-10-07 09:36:25.847204] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25087: invalid model number '9W102tG;gPCp,P)b>V%7`c"n>v43qW1O):6>#Wn' 00:15:31.072 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:31.072 { 00:15:31.072 "nqn": "nqn.2016-06.io.spdk:cnode25087", 00:15:31.072 "model_number": "9W10\u007f2tG;gPCp,P)b\u007f>V%7`c\"n>v43qW1O):6>#Wn", 00:15:31.072 "method": "nvmf_create_subsystem", 00:15:31.072 "req_id": 1 00:15:31.072 } 00:15:31.072 Got JSON-RPC error response 00:15:31.072 response: 00:15:31.072 { 00:15:31.072 "code": -32602, 00:15:31.072 "message": "Invalid MN 9W10\u007f2tG;gPCp,P)b\u007f>V%7`c\"n>v43qW1O):6>#Wn" 00:15:31.072 }' 00:15:31.072 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:31.072 { 00:15:31.072 "nqn": "nqn.2016-06.io.spdk:cnode25087", 00:15:31.072 "model_number": "9W10\u007f2tG;gPCp,P)b\u007f>V%7`c\"n>v43qW1O):6>#Wn", 00:15:31.072 "method": "nvmf_create_subsystem", 00:15:31.072 "req_id": 1 00:15:31.072 } 00:15:31.072 Got JSON-RPC error response 00:15:31.072 response: 00:15:31.072 { 00:15:31.072 "code": -32602, 00:15:31.072 "message": "Invalid MN 9W10\u007f2tG;gPCp,P)b\u007f>V%7`c\"n>v43qW1O):6>#Wn" 00:15:31.072 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:31.072 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:31.637 [2024-10-07 09:36:26.176354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.637 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:31.895 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:31.895 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:31.895 09:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:31.895 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:31.895 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:32.460 [2024-10-07 09:36:27.095322] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:32.460 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:32.460 { 00:15:32.460 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:32.460 "listen_address": { 00:15:32.460 "trtype": "tcp", 00:15:32.460 "traddr": "", 00:15:32.460 "trsvcid": "4421" 00:15:32.460 }, 00:15:32.460 "method": "nvmf_subsystem_remove_listener", 00:15:32.460 "req_id": 1 00:15:32.460 } 00:15:32.460 Got JSON-RPC error response 00:15:32.460 response: 00:15:32.460 { 00:15:32.460 "code": -32602, 00:15:32.460 "message": "Invalid parameters" 00:15:32.460 }' 00:15:32.460 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:32.460 { 00:15:32.460 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:32.460 "listen_address": { 00:15:32.460 "trtype": "tcp", 00:15:32.460 "traddr": "", 00:15:32.460 "trsvcid": "4421" 00:15:32.460 }, 00:15:32.460 "method": "nvmf_subsystem_remove_listener", 00:15:32.460 "req_id": 1 00:15:32.460 } 00:15:32.460 Got JSON-RPC error response 00:15:32.460 response: 00:15:32.460 { 00:15:32.461 "code": -32602, 00:15:32.461 "message": "Invalid parameters" 00:15:32.461 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:32.461 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25460 -i 0 00:15:32.720 [2024-10-07 09:36:27.428397] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25460: invalid cntlid range [0-65519] 00:15:32.720 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:32.720 { 00:15:32.720 "nqn": "nqn.2016-06.io.spdk:cnode25460", 00:15:32.720 "min_cntlid": 0, 00:15:32.720 "method": "nvmf_create_subsystem", 00:15:32.720 "req_id": 1 00:15:32.720 } 00:15:32.720 Got JSON-RPC error response 00:15:32.720 response: 00:15:32.720 { 00:15:32.720 "code": -32602, 00:15:32.720 "message": "Invalid cntlid range [0-65519]" 00:15:32.720 }' 00:15:32.720 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:32.720 { 00:15:32.720 "nqn": "nqn.2016-06.io.spdk:cnode25460", 00:15:32.720 "min_cntlid": 0, 00:15:32.720 "method": "nvmf_create_subsystem", 00:15:32.720 "req_id": 1 00:15:32.720 } 00:15:32.720 Got JSON-RPC error response 00:15:32.720 response: 00:15:32.720 { 00:15:32.720 "code": -32602, 00:15:32.720 "message": "Invalid cntlid range [0-65519]" 00:15:32.720 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:32.720 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23025 -i 65520 00:15:32.978 [2024-10-07 09:36:27.761496] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23025: invalid cntlid range [65520-65519] 00:15:32.978 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@75 -- # out='request: 00:15:32.978 { 00:15:32.978 "nqn": "nqn.2016-06.io.spdk:cnode23025", 00:15:32.978 "min_cntlid": 65520, 00:15:32.978 "method": "nvmf_create_subsystem", 00:15:32.978 "req_id": 1 00:15:32.978 } 00:15:32.978 Got JSON-RPC error response 00:15:32.978 response: 00:15:32.978 { 00:15:32.978 "code": -32602, 00:15:32.978 "message": "Invalid cntlid range [65520-65519]" 00:15:32.978 }' 00:15:32.978 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:32.978 { 00:15:32.978 "nqn": "nqn.2016-06.io.spdk:cnode23025", 00:15:32.978 "min_cntlid": 65520, 00:15:32.978 "method": "nvmf_create_subsystem", 00:15:32.978 "req_id": 1 00:15:32.978 } 00:15:32.978 Got JSON-RPC error response 00:15:32.978 response: 00:15:32.978 { 00:15:32.978 "code": -32602, 00:15:32.978 "message": "Invalid cntlid range [65520-65519]" 00:15:32.978 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:32.978 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4668 -I 0 00:15:33.544 [2024-10-07 09:36:28.150783] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4668: invalid cntlid range [1-0] 00:15:33.544 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:33.544 { 00:15:33.544 "nqn": "nqn.2016-06.io.spdk:cnode4668", 00:15:33.544 "max_cntlid": 0, 00:15:33.544 "method": "nvmf_create_subsystem", 00:15:33.544 "req_id": 1 00:15:33.544 } 00:15:33.544 Got JSON-RPC error response 00:15:33.544 response: 00:15:33.544 { 00:15:33.544 "code": -32602, 00:15:33.544 "message": "Invalid cntlid range [1-0]" 00:15:33.544 }' 00:15:33.544 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:33.544 { 00:15:33.544 "nqn": "nqn.2016-06.io.spdk:cnode4668", 00:15:33.544 "max_cntlid": 0, 00:15:33.544 "method": "nvmf_create_subsystem", 00:15:33.544 "req_id": 1 00:15:33.544 } 00:15:33.544 Got JSON-RPC error response 00:15:33.544 response: 00:15:33.544 { 00:15:33.544 "code": -32602, 00:15:33.544 "message": "Invalid cntlid range [1-0]" 00:15:33.544 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:33.544 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15140 -I 65520 00:15:34.107 [2024-10-07 09:36:28.825085] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15140: invalid cntlid range [1-65520] 00:15:34.107 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:34.107 { 00:15:34.107 "nqn": "nqn.2016-06.io.spdk:cnode15140", 00:15:34.107 "max_cntlid": 65520, 00:15:34.107 "method": "nvmf_create_subsystem", 00:15:34.107 "req_id": 1 00:15:34.107 } 00:15:34.107 Got JSON-RPC error response 00:15:34.108 response: 00:15:34.108 { 00:15:34.108 "code": -32602, 00:15:34.108 "message": "Invalid cntlid range [1-65520]" 00:15:34.108 }' 00:15:34.108 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:34.108 { 00:15:34.108 "nqn": "nqn.2016-06.io.spdk:cnode15140", 00:15:34.108 "max_cntlid": 65520, 00:15:34.108 "method": "nvmf_create_subsystem", 00:15:34.108 "req_id": 1 00:15:34.108 } 00:15:34.108 Got JSON-RPC error response 00:15:34.108 response: 00:15:34.108 { 00:15:34.108 
"code": -32602, 00:15:34.108 "message": "Invalid cntlid range [1-65520]" 00:15:34.108 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:34.108 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15330 -i 6 -I 5 00:15:34.673 [2024-10-07 09:36:29.330663] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15330: invalid cntlid range [6-5] 00:15:34.673 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:34.673 { 00:15:34.673 "nqn": "nqn.2016-06.io.spdk:cnode15330", 00:15:34.673 "min_cntlid": 6, 00:15:34.673 "max_cntlid": 5, 00:15:34.674 "method": "nvmf_create_subsystem", 00:15:34.674 "req_id": 1 00:15:34.674 } 00:15:34.674 Got JSON-RPC error response 00:15:34.674 response: 00:15:34.674 { 00:15:34.674 "code": -32602, 00:15:34.674 "message": "Invalid cntlid range [6-5]" 00:15:34.674 }' 00:15:34.674 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:34.674 { 00:15:34.674 "nqn": "nqn.2016-06.io.spdk:cnode15330", 00:15:34.674 "min_cntlid": 6, 00:15:34.674 "max_cntlid": 5, 00:15:34.674 "method": "nvmf_create_subsystem", 00:15:34.674 "req_id": 1 00:15:34.674 } 00:15:34.674 Got JSON-RPC error response 00:15:34.674 response: 00:15:34.674 { 00:15:34.674 "code": -32602, 00:15:34.674 "message": "Invalid cntlid range [6-5]" 00:15:34.674 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:34.674 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:34.932 { 00:15:34.932 "name": "foobar", 00:15:34.932 "method": "nvmf_delete_target", 00:15:34.932 "req_id": 1 00:15:34.932 } 00:15:34.932 Got JSON-RPC error response 00:15:34.932 response: 00:15:34.932 { 00:15:34.932 "code": -32602, 00:15:34.932 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:34.932 }' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:34.932 { 00:15:34.932 "name": "foobar", 00:15:34.932 "method": "nvmf_delete_target", 00:15:34.932 "req_id": 1 00:15:34.932 } 00:15:34.932 Got JSON-RPC error response 00:15:34.932 response: 00:15:34.932 { 00:15:34.932 "code": -32602, 00:15:34.932 "message": "The specified target doesn't exist, cannot delete it." 
00:15:34.932 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:34.932 rmmod nvme_tcp 00:15:34.932 rmmod nvme_fabrics 00:15:34.932 rmmod nvme_keyring 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1499205 ']' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1499205 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1499205 ']' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1499205 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499205 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499205' 00:15:34.932 killing process with pid 1499205 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1499205 00:15:34.932 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1499205 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.500 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:37.407 00:15:37.407 real 0m11.964s 00:15:37.407 user 0m32.517s 00:15:37.407 sys 0m3.267s 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:37.407 ************************************ 00:15:37.407 END TEST nvmf_invalid 00:15:37.407 ************************************ 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.407 ************************************ 00:15:37.407 START TEST nvmf_connect_stress 00:15:37.407 ************************************ 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:37.407 * Looking for test storage... 
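A note on the nvmf_invalid cases that finish just above, before the connect_stress setup continues below: each case drives a single JSON-RPC call through scripts/rpc.py and then pattern-matches the error text the target returns (for example "Invalid cntlid range [6-5]"). A minimal stand-alone sketch of that pattern follows, for reference only; it is not an extra command from this run, and the cnode number is illustrative:

  # Attempt to create a subsystem with an impossible cntlid range and
  # check that the target rejects it with the expected error string.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 6 -I 5 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]] && echo "rejected as expected"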
00:15:37.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:15:37.407 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.667 --rc genhtml_branch_coverage=1 00:15:37.667 --rc genhtml_function_coverage=1 00:15:37.667 --rc genhtml_legend=1 00:15:37.667 --rc geninfo_all_blocks=1 00:15:37.667 --rc geninfo_unexecuted_blocks=1 00:15:37.667 00:15:37.667 ' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.667 --rc genhtml_branch_coverage=1 00:15:37.667 --rc genhtml_function_coverage=1 00:15:37.667 --rc genhtml_legend=1 00:15:37.667 --rc geninfo_all_blocks=1 00:15:37.667 --rc geninfo_unexecuted_blocks=1 00:15:37.667 00:15:37.667 ' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.667 --rc genhtml_branch_coverage=1 00:15:37.667 --rc genhtml_function_coverage=1 00:15:37.667 --rc genhtml_legend=1 00:15:37.667 --rc geninfo_all_blocks=1 00:15:37.667 --rc geninfo_unexecuted_blocks=1 00:15:37.667 00:15:37.667 ' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.667 --rc genhtml_branch_coverage=1 00:15:37.667 --rc genhtml_function_coverage=1 00:15:37.667 --rc genhtml_legend=1 00:15:37.667 --rc geninfo_all_blocks=1 00:15:37.667 --rc geninfo_unexecuted_blocks=1 00:15:37.667 00:15:37.667 ' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.667 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:37.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:37.668 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:40.200 09:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:40.200 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:40.201 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:40.201 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:40.201 Found net devices under 0000:84:00.0: cvl_0_0 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:40.201 Found net devices under 0000:84:00.1: cvl_0_1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.201 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:40.201 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:40.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:15:40.460 00:15:40.460 --- 10.0.0.2 ping statistics --- 00:15:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.460 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:15:40.460 00:15:40.460 --- 10.0.0.1 ping statistics --- 00:15:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.460 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1502249 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1502249 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1502249 ']' 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:40.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.460 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 [2024-10-07 09:36:35.207671] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:15:40.460 [2024-10-07 09:36:35.207838] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.719 [2024-10-07 09:36:35.307516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.719 [2024-10-07 09:36:35.456246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.719 [2024-10-07 09:36:35.456365] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.719 [2024-10-07 09:36:35.456402] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.719 [2024-10-07 09:36:35.456435] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.719 [2024-10-07 09:36:35.456460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.719 [2024-10-07 09:36:35.458109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.719 [2024-10-07 09:36:35.458204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.719 [2024-10-07 09:36:35.458207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 [2024-10-07 09:36:36.234331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
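The connect_stress run above is bringing up a minimal NVMe/TCP target over JSON-RPC: the transport and subsystem calls appear just above, and the listener and null bdev calls follow immediately below. Collected in one place for readability (this is not an additional sequence executed in this job, and it assumes the default /var/tmp/spdk.sock RPC socket that waitforlisten polls above), the bring-up is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create the TCP transport with the options used above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # subsystem cnode1: allow any host (-a), serial number, max 10 namespaces
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on 10.0.0.2:4420, the address assigned inside cvl_0_0_ns_spdk above
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # null bdev created for the test (arguments as shown in the log)
  $rpc bdev_null_create NULL1 1000 512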
00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 [2024-10-07 09:36:36.262167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 NULL1 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1502409 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.651 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.652 09:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.652 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.910 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.910 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:41.910 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.910 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.910 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.168 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.168 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:42.168 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.168 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.168 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.733 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.733 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:42.733 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.733 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.733 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.991 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.991 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:42.991 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.991 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.991 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.250 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.250 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:43.250 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.250 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.250 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.507 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.507 09:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:43.507 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.507 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.768 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.768 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:43.768 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.768 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.768 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.337 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.337 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:44.337 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.337 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.337 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:44.595 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.595 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.852 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.852 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:44.852 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.852 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.852 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.110 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.110 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:45.110 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.110 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.110 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.368 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.369 09:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:45.369 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.369 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.369 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.934 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.934 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:45.934 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.934 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.934 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.193 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.193 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:46.193 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.193 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.193 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.452 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.452 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:46.452 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.452 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.452 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.711 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.711 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:46.711 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.711 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.711 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.988 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.988 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:46.988 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.988 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.988 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.552 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.552 09:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:47.552 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.552 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.552 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.809 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.809 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:47.809 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.809 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.809 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.065 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.065 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:48.065 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.065 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.065 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.322 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.322 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:48.322 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.322 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.322 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.579 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.579 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:48.579 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.579 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.579 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.142 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.142 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:49.142 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.142 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.142 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.399 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.399 09:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:49.399 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.399 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.399 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.655 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.655 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:49.655 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.655 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.655 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.912 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.912 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:49.912 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.912 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.912 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.476 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.476 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:50.476 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.476 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.476 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.733 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.734 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:50.734 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.734 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.734 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.991 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.991 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:50.991 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.991 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.991 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.248 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.248 09:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:51.248 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.248 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.248 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.506 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.506 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:51.506 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.506 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.506 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.763 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1502409 00:15:52.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1502409) - No such process 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1502409 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.020 rmmod nvme_tcp 00:15:52.020 rmmod nvme_fabrics 00:15:52.020 rmmod nvme_keyring 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:52.020 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1502249 ']' 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1502249 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1502249 ']' 00:15:52.021 09:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1502249 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1502249 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1502249' 00:15:52.021 killing process with pid 1502249 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1502249 00:15:52.021 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1502249 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.589 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.496 00:15:54.496 real 0m17.039s 00:15:54.496 user 0m41.182s 00:15:54.496 sys 0m6.699s 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.496 ************************************ 00:15:54.496 END TEST nvmf_connect_stress 00:15:54.496 ************************************ 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.496 
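The watchdog loop traced above reduces to "poll a background PID with kill -0 and push an RPC on every pass". A minimal standalone sketch of that pattern, assuming an spdk checkout as working directory (this is not the real connect_stress.sh; the rpc.py path, the rpc_get_methods call and the sleep stand-in are illustrative assumptions):

    #!/usr/bin/env bash
    # Sketch: keep the SPDK target busy with RPCs while a background stress
    # process is still running. kill -0 sends no signal; it only checks that
    # the PID still exists, which is what connect_stress.sh@34 does above.
    rpc=./scripts/rpc.py             # assumed rpc.py location inside an spdk checkout
    sleep 30 & pid=$!                # stand-in for the real stress workload
    while kill -0 "$pid" 2>/dev/null; do
        "$rpc" rpc_get_methods >/dev/null   # any cheap RPC keeps the target exercised
        sleep 1
    done
    wait "$pid" || true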
09:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.496 ************************************ 00:15:54.496 START TEST nvmf_fused_ordering 00:15:54.496 ************************************ 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:54.496 * Looking for test storage... 00:15:54.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.496 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.765 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:54.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.766 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.335 09:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:57.335 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.335 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:57.336 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:57.336 Found net devices under 0000:84:00.0: cvl_0_0 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:57.336 Found net devices under 0000:84:00.1: cvl_0_1 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.336 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:57.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:15:57.336 00:15:57.336 --- 10.0.0.2 ping statistics --- 00:15:57.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.336 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:15:57.336 00:15:57.336 --- 10.0.0.1 ping statistics --- 00:15:57.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.336 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:57.336 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1505574 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1505574 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1505574 ']' 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:57.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.595 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.595 [2024-10-07 09:36:52.276328] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:15:57.596 [2024-10-07 09:36:52.276487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.596 [2024-10-07 09:36:52.389413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.855 [2024-10-07 09:36:52.577320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.855 [2024-10-07 09:36:52.577402] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.855 [2024-10-07 09:36:52.577429] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.855 [2024-10-07 09:36:52.577451] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.855 [2024-10-07 09:36:52.577483] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.855 [2024-10-07 09:36:52.578426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.792 [2024-10-07 09:36:53.382822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.792 [2024-10-07 09:36:53.399141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:58.792 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.793 NULL1 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.793 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:58.793 [2024-10-07 09:36:53.455453] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
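Target-side, the fused_ordering setup replayed in the trace is a short RPC sequence. The same configuration as a standalone sketch driven through rpc.py directly (the rpc.py path is an assumption; the RPC names and arguments are exactly the ones shown in the xtrace):

    #!/usr/bin/env bash
    # Sketch: recreate the fused_ordering target configuration by hand.
    rpc=./scripts/rpc.py                                   # assumed rpc.py location
    $rpc nvmf_create_transport -t tcp -o -u 8192           # transport flags as traced above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks -> the "size: 1GB" namespace seen above
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1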
00:15:58.793 [2024-10-07 09:36:53.455546] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505754 ] 00:15:59.359 Attached to nqn.2016-06.io.spdk:cnode1 00:15:59.359 Namespace ID: 1 size: 1GB 00:15:59.359 fused_ordering(0) 00:15:59.359 fused_ordering(1) 00:15:59.359 fused_ordering(2) 00:15:59.359 fused_ordering(3) 00:15:59.359 fused_ordering(4) 00:15:59.359 fused_ordering(5) 00:15:59.359 fused_ordering(6) 00:15:59.359 fused_ordering(7) 00:15:59.359 fused_ordering(8) 00:15:59.359 fused_ordering(9) 00:15:59.359 fused_ordering(10) 00:15:59.359 fused_ordering(11) 00:15:59.359 fused_ordering(12) 00:15:59.359 fused_ordering(13) 00:15:59.359 fused_ordering(14) 00:15:59.359 fused_ordering(15) 00:15:59.359 fused_ordering(16) 00:15:59.359 fused_ordering(17) 00:15:59.359 fused_ordering(18) 00:15:59.359 fused_ordering(19) 00:15:59.359 fused_ordering(20) 00:15:59.359 fused_ordering(21) 00:15:59.359 fused_ordering(22) 00:15:59.359 fused_ordering(23) 00:15:59.359 fused_ordering(24) 00:15:59.359 fused_ordering(25) 00:15:59.359 fused_ordering(26) 00:15:59.359 fused_ordering(27) 00:15:59.359 fused_ordering(28) 00:15:59.359 fused_ordering(29) 00:15:59.359 fused_ordering(30) 00:15:59.359 fused_ordering(31) 00:15:59.359 fused_ordering(32) 00:15:59.359 fused_ordering(33) 00:15:59.359 fused_ordering(34) 00:15:59.359 fused_ordering(35) 00:15:59.359 fused_ordering(36) 00:15:59.359 fused_ordering(37) 00:15:59.359 fused_ordering(38) 00:15:59.359 fused_ordering(39) 00:15:59.359 fused_ordering(40) 00:15:59.359 fused_ordering(41) 00:15:59.359 fused_ordering(42) 00:15:59.359 fused_ordering(43) 00:15:59.359 fused_ordering(44) 00:15:59.359 fused_ordering(45) 00:15:59.359 fused_ordering(46) 00:15:59.359 fused_ordering(47) 00:15:59.359 fused_ordering(48) 00:15:59.359 fused_ordering(49) 00:15:59.359 fused_ordering(50) 00:15:59.359 fused_ordering(51) 00:15:59.359 fused_ordering(52) 00:15:59.359 fused_ordering(53) 00:15:59.359 fused_ordering(54) 00:15:59.359 fused_ordering(55) 00:15:59.359 fused_ordering(56) 00:15:59.359 fused_ordering(57) 00:15:59.359 fused_ordering(58) 00:15:59.359 fused_ordering(59) 00:15:59.359 fused_ordering(60) 00:15:59.359 fused_ordering(61) 00:15:59.359 fused_ordering(62) 00:15:59.359 fused_ordering(63) 00:15:59.359 fused_ordering(64) 00:15:59.359 fused_ordering(65) 00:15:59.359 fused_ordering(66) 00:15:59.359 fused_ordering(67) 00:15:59.359 fused_ordering(68) 00:15:59.359 fused_ordering(69) 00:15:59.359 fused_ordering(70) 00:15:59.359 fused_ordering(71) 00:15:59.359 fused_ordering(72) 00:15:59.359 fused_ordering(73) 00:15:59.359 fused_ordering(74) 00:15:59.359 fused_ordering(75) 00:15:59.359 fused_ordering(76) 00:15:59.359 fused_ordering(77) 00:15:59.359 fused_ordering(78) 00:15:59.359 fused_ordering(79) 00:15:59.359 fused_ordering(80) 00:15:59.359 fused_ordering(81) 00:15:59.359 fused_ordering(82) 00:15:59.359 fused_ordering(83) 00:15:59.359 fused_ordering(84) 00:15:59.359 fused_ordering(85) 00:15:59.359 fused_ordering(86) 00:15:59.359 fused_ordering(87) 00:15:59.359 fused_ordering(88) 00:15:59.359 fused_ordering(89) 00:15:59.359 fused_ordering(90) 00:15:59.359 fused_ordering(91) 00:15:59.359 fused_ordering(92) 00:15:59.359 fused_ordering(93) 00:15:59.359 fused_ordering(94) 00:15:59.359 fused_ordering(95) 00:15:59.359 fused_ordering(96) 00:15:59.359 fused_ordering(97) 00:15:59.359 fused_ordering(98) 
00:15:59.359 fused_ordering(99) 00:15:59.359 fused_ordering(100) 00:15:59.359 fused_ordering(101) 00:15:59.359 fused_ordering(102) 00:15:59.359 fused_ordering(103) 00:15:59.359 fused_ordering(104) 00:15:59.359 fused_ordering(105) 00:15:59.359 fused_ordering(106) 00:15:59.359 fused_ordering(107) 00:15:59.359 fused_ordering(108) 00:15:59.359 fused_ordering(109) 00:15:59.359 fused_ordering(110) 00:15:59.359 fused_ordering(111) 00:15:59.359 fused_ordering(112) 00:15:59.359 fused_ordering(113) 00:15:59.359 fused_ordering(114) 00:15:59.359 fused_ordering(115) 00:15:59.359 fused_ordering(116) 00:15:59.359 fused_ordering(117) 00:15:59.359 fused_ordering(118) 00:15:59.359 fused_ordering(119) 00:15:59.359 fused_ordering(120) 00:15:59.359 fused_ordering(121) 00:15:59.359 fused_ordering(122) 00:15:59.359 fused_ordering(123) 00:15:59.359 fused_ordering(124) 00:15:59.359 fused_ordering(125) 00:15:59.359 fused_ordering(126) 00:15:59.359 fused_ordering(127) 00:15:59.359 fused_ordering(128) 00:15:59.359 fused_ordering(129) 00:15:59.359 fused_ordering(130) 00:15:59.359 fused_ordering(131) 00:15:59.359 fused_ordering(132) 00:15:59.359 fused_ordering(133) 00:15:59.359 fused_ordering(134) 00:15:59.359 fused_ordering(135) 00:15:59.359 fused_ordering(136) 00:15:59.359 fused_ordering(137) 00:15:59.359 fused_ordering(138) 00:15:59.359 fused_ordering(139) 00:15:59.359 fused_ordering(140) 00:15:59.359 fused_ordering(141) 00:15:59.359 fused_ordering(142) 00:15:59.359 fused_ordering(143) 00:15:59.359 fused_ordering(144) 00:15:59.359 fused_ordering(145) 00:15:59.359 fused_ordering(146) 00:15:59.359 fused_ordering(147) 00:15:59.359 fused_ordering(148) 00:15:59.359 fused_ordering(149) 00:15:59.359 fused_ordering(150) 00:15:59.359 fused_ordering(151) 00:15:59.359 fused_ordering(152) 00:15:59.359 fused_ordering(153) 00:15:59.359 fused_ordering(154) 00:15:59.359 fused_ordering(155) 00:15:59.359 fused_ordering(156) 00:15:59.359 fused_ordering(157) 00:15:59.359 fused_ordering(158) 00:15:59.359 fused_ordering(159) 00:15:59.359 fused_ordering(160) 00:15:59.359 fused_ordering(161) 00:15:59.359 fused_ordering(162) 00:15:59.359 fused_ordering(163) 00:15:59.359 fused_ordering(164) 00:15:59.359 fused_ordering(165) 00:15:59.359 fused_ordering(166) 00:15:59.359 fused_ordering(167) 00:15:59.359 fused_ordering(168) 00:15:59.359 fused_ordering(169) 00:15:59.359 fused_ordering(170) 00:15:59.359 fused_ordering(171) 00:15:59.359 fused_ordering(172) 00:15:59.359 fused_ordering(173) 00:15:59.359 fused_ordering(174) 00:15:59.359 fused_ordering(175) 00:15:59.359 fused_ordering(176) 00:15:59.359 fused_ordering(177) 00:15:59.359 fused_ordering(178) 00:15:59.359 fused_ordering(179) 00:15:59.359 fused_ordering(180) 00:15:59.359 fused_ordering(181) 00:15:59.359 fused_ordering(182) 00:15:59.359 fused_ordering(183) 00:15:59.359 fused_ordering(184) 00:15:59.359 fused_ordering(185) 00:15:59.359 fused_ordering(186) 00:15:59.359 fused_ordering(187) 00:15:59.359 fused_ordering(188) 00:15:59.359 fused_ordering(189) 00:15:59.359 fused_ordering(190) 00:15:59.359 fused_ordering(191) 00:15:59.359 fused_ordering(192) 00:15:59.359 fused_ordering(193) 00:15:59.359 fused_ordering(194) 00:15:59.359 fused_ordering(195) 00:15:59.359 fused_ordering(196) 00:15:59.359 fused_ordering(197) 00:15:59.359 fused_ordering(198) 00:15:59.359 fused_ordering(199) 00:15:59.359 fused_ordering(200) 00:15:59.359 fused_ordering(201) 00:15:59.359 fused_ordering(202) 00:15:59.359 fused_ordering(203) 00:15:59.359 fused_ordering(204) 00:15:59.359 fused_ordering(205) 00:15:59.926 
fused_ordering(206) 00:15:59.926 fused_ordering(207) 00:15:59.926 fused_ordering(208) 00:15:59.926 fused_ordering(209) 00:15:59.926 fused_ordering(210) 00:15:59.926 fused_ordering(211) 00:15:59.926 fused_ordering(212) 00:15:59.926 fused_ordering(213) 00:15:59.926 fused_ordering(214) 00:15:59.926 fused_ordering(215) 00:15:59.926 fused_ordering(216) 00:15:59.926 fused_ordering(217) 00:15:59.926 fused_ordering(218) 00:15:59.926 fused_ordering(219) 00:15:59.926 fused_ordering(220) 00:15:59.926 fused_ordering(221) 00:15:59.926 fused_ordering(222) 00:15:59.926 fused_ordering(223) 00:15:59.926 fused_ordering(224) 00:15:59.926 fused_ordering(225) 00:15:59.926 fused_ordering(226) 00:15:59.926 fused_ordering(227) 00:15:59.926 fused_ordering(228) 00:15:59.926 fused_ordering(229) 00:15:59.926 fused_ordering(230) 00:15:59.926 fused_ordering(231) 00:15:59.926 fused_ordering(232) 00:15:59.926 fused_ordering(233) 00:15:59.926 fused_ordering(234) 00:15:59.926 fused_ordering(235) 00:15:59.926 fused_ordering(236) 00:15:59.926 fused_ordering(237) 00:15:59.926 fused_ordering(238) 00:15:59.926 fused_ordering(239) 00:15:59.926 fused_ordering(240) 00:15:59.926 fused_ordering(241) 00:15:59.926 fused_ordering(242) 00:15:59.926 fused_ordering(243) 00:15:59.926 fused_ordering(244) 00:15:59.926 fused_ordering(245) 00:15:59.926 fused_ordering(246) 00:15:59.926 fused_ordering(247) 00:15:59.926 fused_ordering(248) 00:15:59.926 fused_ordering(249) 00:15:59.926 fused_ordering(250) 00:15:59.926 fused_ordering(251) 00:15:59.926 fused_ordering(252) 00:15:59.926 fused_ordering(253) 00:15:59.926 fused_ordering(254) 00:15:59.926 fused_ordering(255) 00:15:59.926 fused_ordering(256) 00:15:59.926 fused_ordering(257) 00:15:59.926 fused_ordering(258) 00:15:59.926 fused_ordering(259) 00:15:59.926 fused_ordering(260) 00:15:59.926 fused_ordering(261) 00:15:59.926 fused_ordering(262) 00:15:59.926 fused_ordering(263) 00:15:59.926 fused_ordering(264) 00:15:59.926 fused_ordering(265) 00:15:59.926 fused_ordering(266) 00:15:59.926 fused_ordering(267) 00:15:59.926 fused_ordering(268) 00:15:59.926 fused_ordering(269) 00:15:59.926 fused_ordering(270) 00:15:59.926 fused_ordering(271) 00:15:59.926 fused_ordering(272) 00:15:59.926 fused_ordering(273) 00:15:59.926 fused_ordering(274) 00:15:59.926 fused_ordering(275) 00:15:59.926 fused_ordering(276) 00:15:59.926 fused_ordering(277) 00:15:59.926 fused_ordering(278) 00:15:59.926 fused_ordering(279) 00:15:59.926 fused_ordering(280) 00:15:59.926 fused_ordering(281) 00:15:59.926 fused_ordering(282) 00:15:59.926 fused_ordering(283) 00:15:59.926 fused_ordering(284) 00:15:59.926 fused_ordering(285) 00:15:59.926 fused_ordering(286) 00:15:59.926 fused_ordering(287) 00:15:59.926 fused_ordering(288) 00:15:59.926 fused_ordering(289) 00:15:59.926 fused_ordering(290) 00:15:59.926 fused_ordering(291) 00:15:59.926 fused_ordering(292) 00:15:59.926 fused_ordering(293) 00:15:59.926 fused_ordering(294) 00:15:59.926 fused_ordering(295) 00:15:59.926 fused_ordering(296) 00:15:59.926 fused_ordering(297) 00:15:59.926 fused_ordering(298) 00:15:59.926 fused_ordering(299) 00:15:59.926 fused_ordering(300) 00:15:59.926 fused_ordering(301) 00:15:59.926 fused_ordering(302) 00:15:59.926 fused_ordering(303) 00:15:59.926 fused_ordering(304) 00:15:59.926 fused_ordering(305) 00:15:59.926 fused_ordering(306) 00:15:59.926 fused_ordering(307) 00:15:59.926 fused_ordering(308) 00:15:59.926 fused_ordering(309) 00:15:59.926 fused_ordering(310) 00:15:59.926 fused_ordering(311) 00:15:59.926 fused_ordering(312) 00:15:59.926 fused_ordering(313) 
00:15:59.926 fused_ordering(314) 00:15:59.926 fused_ordering(315) 00:15:59.926 fused_ordering(316) 00:15:59.926 fused_ordering(317) 00:15:59.926 fused_ordering(318) 00:15:59.926 fused_ordering(319) 00:15:59.926 fused_ordering(320) 00:15:59.926 fused_ordering(321) 00:15:59.926 fused_ordering(322) 00:15:59.926 fused_ordering(323) 00:15:59.926 fused_ordering(324) 00:15:59.926 fused_ordering(325) 00:15:59.926 fused_ordering(326) 00:15:59.926 fused_ordering(327) 00:15:59.926 fused_ordering(328) 00:15:59.926 fused_ordering(329) 00:15:59.926 fused_ordering(330) 00:15:59.926 fused_ordering(331) 00:15:59.926 fused_ordering(332) 00:15:59.926 fused_ordering(333) 00:15:59.926 fused_ordering(334) 00:15:59.926 fused_ordering(335) 00:15:59.926 fused_ordering(336) 00:15:59.926 fused_ordering(337) 00:15:59.926 fused_ordering(338) 00:15:59.926 fused_ordering(339) 00:15:59.926 fused_ordering(340) 00:15:59.926 fused_ordering(341) 00:15:59.926 fused_ordering(342) 00:15:59.926 fused_ordering(343) 00:15:59.926 fused_ordering(344) 00:15:59.926 fused_ordering(345) 00:15:59.926 fused_ordering(346) 00:15:59.926 fused_ordering(347) 00:15:59.926 fused_ordering(348) 00:15:59.926 fused_ordering(349) 00:15:59.926 fused_ordering(350) 00:15:59.926 fused_ordering(351) 00:15:59.926 fused_ordering(352) 00:15:59.926 fused_ordering(353) 00:15:59.926 fused_ordering(354) 00:15:59.926 fused_ordering(355) 00:15:59.926 fused_ordering(356) 00:15:59.926 fused_ordering(357) 00:15:59.926 fused_ordering(358) 00:15:59.926 fused_ordering(359) 00:15:59.926 fused_ordering(360) 00:15:59.926 fused_ordering(361) 00:15:59.926 fused_ordering(362) 00:15:59.926 fused_ordering(363) 00:15:59.926 fused_ordering(364) 00:15:59.926 fused_ordering(365) 00:15:59.926 fused_ordering(366) 00:15:59.926 fused_ordering(367) 00:15:59.926 fused_ordering(368) 00:15:59.926 fused_ordering(369) 00:15:59.926 fused_ordering(370) 00:15:59.926 fused_ordering(371) 00:15:59.926 fused_ordering(372) 00:15:59.926 fused_ordering(373) 00:15:59.926 fused_ordering(374) 00:15:59.926 fused_ordering(375) 00:15:59.926 fused_ordering(376) 00:15:59.926 fused_ordering(377) 00:15:59.926 fused_ordering(378) 00:15:59.926 fused_ordering(379) 00:15:59.926 fused_ordering(380) 00:15:59.926 fused_ordering(381) 00:15:59.926 fused_ordering(382) 00:15:59.926 fused_ordering(383) 00:15:59.926 fused_ordering(384) 00:15:59.926 fused_ordering(385) 00:15:59.926 fused_ordering(386) 00:15:59.926 fused_ordering(387) 00:15:59.926 fused_ordering(388) 00:15:59.926 fused_ordering(389) 00:15:59.926 fused_ordering(390) 00:15:59.926 fused_ordering(391) 00:15:59.926 fused_ordering(392) 00:15:59.926 fused_ordering(393) 00:15:59.926 fused_ordering(394) 00:15:59.926 fused_ordering(395) 00:15:59.926 fused_ordering(396) 00:15:59.926 fused_ordering(397) 00:15:59.926 fused_ordering(398) 00:15:59.926 fused_ordering(399) 00:15:59.926 fused_ordering(400) 00:15:59.926 fused_ordering(401) 00:15:59.926 fused_ordering(402) 00:15:59.926 fused_ordering(403) 00:15:59.926 fused_ordering(404) 00:15:59.926 fused_ordering(405) 00:15:59.926 fused_ordering(406) 00:15:59.926 fused_ordering(407) 00:15:59.926 fused_ordering(408) 00:15:59.926 fused_ordering(409) 00:15:59.926 fused_ordering(410) 00:16:00.186 fused_ordering(411) 00:16:00.186 fused_ordering(412) 00:16:00.186 fused_ordering(413) 00:16:00.186 fused_ordering(414) 00:16:00.186 fused_ordering(415) 00:16:00.186 fused_ordering(416) 00:16:00.186 fused_ordering(417) 00:16:00.186 fused_ordering(418) 00:16:00.186 fused_ordering(419) 00:16:00.186 fused_ordering(420) 00:16:00.186 
fused_ordering(421) 00:16:00.186 fused_ordering(422) 00:16:00.186 fused_ordering(423) 00:16:00.186 fused_ordering(424) 00:16:00.186 fused_ordering(425) 00:16:00.186 fused_ordering(426) 00:16:00.186 fused_ordering(427) 00:16:00.186 fused_ordering(428) 00:16:00.186 fused_ordering(429) 00:16:00.186 fused_ordering(430) 00:16:00.186 fused_ordering(431) 00:16:00.186 fused_ordering(432) 00:16:00.186 fused_ordering(433) 00:16:00.186 fused_ordering(434) 00:16:00.186 fused_ordering(435) 00:16:00.186 fused_ordering(436) 00:16:00.186 fused_ordering(437) 00:16:00.186 fused_ordering(438) 00:16:00.186 fused_ordering(439) 00:16:00.186 fused_ordering(440) 00:16:00.186 fused_ordering(441) 00:16:00.186 fused_ordering(442) 00:16:00.186 fused_ordering(443) 00:16:00.186 fused_ordering(444) 00:16:00.186 fused_ordering(445) 00:16:00.186 fused_ordering(446) 00:16:00.186 fused_ordering(447) 00:16:00.186 fused_ordering(448) 00:16:00.186 fused_ordering(449) 00:16:00.186 fused_ordering(450) 00:16:00.186 fused_ordering(451) 00:16:00.186 fused_ordering(452) 00:16:00.186 fused_ordering(453) 00:16:00.186 fused_ordering(454) 00:16:00.186 fused_ordering(455) 00:16:00.186 fused_ordering(456) 00:16:00.186 fused_ordering(457) 00:16:00.186 fused_ordering(458) 00:16:00.186 fused_ordering(459) 00:16:00.186 fused_ordering(460) 00:16:00.186 fused_ordering(461) 00:16:00.186 fused_ordering(462) 00:16:00.186 fused_ordering(463) 00:16:00.186 fused_ordering(464) 00:16:00.186 fused_ordering(465) 00:16:00.186 fused_ordering(466) 00:16:00.186 fused_ordering(467) 00:16:00.186 fused_ordering(468) 00:16:00.186 fused_ordering(469) 00:16:00.186 fused_ordering(470) 00:16:00.186 fused_ordering(471) 00:16:00.186 fused_ordering(472) 00:16:00.186 fused_ordering(473) 00:16:00.186 fused_ordering(474) 00:16:00.186 fused_ordering(475) 00:16:00.186 fused_ordering(476) 00:16:00.186 fused_ordering(477) 00:16:00.186 fused_ordering(478) 00:16:00.186 fused_ordering(479) 00:16:00.186 fused_ordering(480) 00:16:00.186 fused_ordering(481) 00:16:00.186 fused_ordering(482) 00:16:00.186 fused_ordering(483) 00:16:00.186 fused_ordering(484) 00:16:00.186 fused_ordering(485) 00:16:00.186 fused_ordering(486) 00:16:00.186 fused_ordering(487) 00:16:00.187 fused_ordering(488) 00:16:00.187 fused_ordering(489) 00:16:00.187 fused_ordering(490) 00:16:00.187 fused_ordering(491) 00:16:00.187 fused_ordering(492) 00:16:00.187 fused_ordering(493) 00:16:00.187 fused_ordering(494) 00:16:00.187 fused_ordering(495) 00:16:00.187 fused_ordering(496) 00:16:00.187 fused_ordering(497) 00:16:00.187 fused_ordering(498) 00:16:00.187 fused_ordering(499) 00:16:00.187 fused_ordering(500) 00:16:00.187 fused_ordering(501) 00:16:00.187 fused_ordering(502) 00:16:00.187 fused_ordering(503) 00:16:00.187 fused_ordering(504) 00:16:00.187 fused_ordering(505) 00:16:00.187 fused_ordering(506) 00:16:00.187 fused_ordering(507) 00:16:00.187 fused_ordering(508) 00:16:00.187 fused_ordering(509) 00:16:00.187 fused_ordering(510) 00:16:00.187 fused_ordering(511) 00:16:00.187 fused_ordering(512) 00:16:00.187 fused_ordering(513) 00:16:00.187 fused_ordering(514) 00:16:00.187 fused_ordering(515) 00:16:00.187 fused_ordering(516) 00:16:00.187 fused_ordering(517) 00:16:00.187 fused_ordering(518) 00:16:00.187 fused_ordering(519) 00:16:00.187 fused_ordering(520) 00:16:00.187 fused_ordering(521) 00:16:00.187 fused_ordering(522) 00:16:00.187 fused_ordering(523) 00:16:00.187 fused_ordering(524) 00:16:00.187 fused_ordering(525) 00:16:00.187 fused_ordering(526) 00:16:00.187 fused_ordering(527) 00:16:00.187 fused_ordering(528) 
00:16:00.187 fused_ordering(529) 00:16:00.187 fused_ordering(530) 00:16:00.187 fused_ordering(531) 00:16:00.187 fused_ordering(532) 00:16:00.187 fused_ordering(533) 00:16:00.187 fused_ordering(534) 00:16:00.187 fused_ordering(535) 00:16:00.187 fused_ordering(536) 00:16:00.187 fused_ordering(537) 00:16:00.187 fused_ordering(538) 00:16:00.187 fused_ordering(539) 00:16:00.187 fused_ordering(540) 00:16:00.187 fused_ordering(541) 00:16:00.187 fused_ordering(542) 00:16:00.187 fused_ordering(543) 00:16:00.187 fused_ordering(544) 00:16:00.187 fused_ordering(545) 00:16:00.187 fused_ordering(546) 00:16:00.187 fused_ordering(547) 00:16:00.187 fused_ordering(548) 00:16:00.187 fused_ordering(549) 00:16:00.187 fused_ordering(550) 00:16:00.187 fused_ordering(551) 00:16:00.187 fused_ordering(552) 00:16:00.187 fused_ordering(553) 00:16:00.187 fused_ordering(554) 00:16:00.187 fused_ordering(555) 00:16:00.187 fused_ordering(556) 00:16:00.187 fused_ordering(557) 00:16:00.187 fused_ordering(558) 00:16:00.187 fused_ordering(559) 00:16:00.187 fused_ordering(560) 00:16:00.187 fused_ordering(561) 00:16:00.187 fused_ordering(562) 00:16:00.187 fused_ordering(563) 00:16:00.187 fused_ordering(564) 00:16:00.187 fused_ordering(565) 00:16:00.187 fused_ordering(566) 00:16:00.187 fused_ordering(567) 00:16:00.187 fused_ordering(568) 00:16:00.187 fused_ordering(569) 00:16:00.187 fused_ordering(570) 00:16:00.187 fused_ordering(571) 00:16:00.187 fused_ordering(572) 00:16:00.187 fused_ordering(573) 00:16:00.187 fused_ordering(574) 00:16:00.187 fused_ordering(575) 00:16:00.187 fused_ordering(576) 00:16:00.187 fused_ordering(577) 00:16:00.187 fused_ordering(578) 00:16:00.187 fused_ordering(579) 00:16:00.187 fused_ordering(580) 00:16:00.187 fused_ordering(581) 00:16:00.187 fused_ordering(582) 00:16:00.187 fused_ordering(583) 00:16:00.187 fused_ordering(584) 00:16:00.187 fused_ordering(585) 00:16:00.187 fused_ordering(586) 00:16:00.187 fused_ordering(587) 00:16:00.187 fused_ordering(588) 00:16:00.187 fused_ordering(589) 00:16:00.187 fused_ordering(590) 00:16:00.187 fused_ordering(591) 00:16:00.187 fused_ordering(592) 00:16:00.187 fused_ordering(593) 00:16:00.187 fused_ordering(594) 00:16:00.187 fused_ordering(595) 00:16:00.187 fused_ordering(596) 00:16:00.187 fused_ordering(597) 00:16:00.187 fused_ordering(598) 00:16:00.187 fused_ordering(599) 00:16:00.187 fused_ordering(600) 00:16:00.187 fused_ordering(601) 00:16:00.187 fused_ordering(602) 00:16:00.187 fused_ordering(603) 00:16:00.187 fused_ordering(604) 00:16:00.187 fused_ordering(605) 00:16:00.187 fused_ordering(606) 00:16:00.187 fused_ordering(607) 00:16:00.187 fused_ordering(608) 00:16:00.187 fused_ordering(609) 00:16:00.187 fused_ordering(610) 00:16:00.187 fused_ordering(611) 00:16:00.187 fused_ordering(612) 00:16:00.187 fused_ordering(613) 00:16:00.187 fused_ordering(614) 00:16:00.187 fused_ordering(615) 00:16:00.755 fused_ordering(616) 00:16:00.755 fused_ordering(617) 00:16:00.755 fused_ordering(618) 00:16:00.755 fused_ordering(619) 00:16:00.755 fused_ordering(620) 00:16:00.755 fused_ordering(621) 00:16:00.755 fused_ordering(622) 00:16:00.755 fused_ordering(623) 00:16:00.755 fused_ordering(624) 00:16:00.755 fused_ordering(625) 00:16:00.755 fused_ordering(626) 00:16:00.755 fused_ordering(627) 00:16:00.755 fused_ordering(628) 00:16:00.755 fused_ordering(629) 00:16:00.755 fused_ordering(630) 00:16:00.755 fused_ordering(631) 00:16:00.755 fused_ordering(632) 00:16:00.755 fused_ordering(633) 00:16:00.755 fused_ordering(634) 00:16:00.755 fused_ordering(635) 00:16:00.755 
fused_ordering(636) 00:16:00.755 fused_ordering(637) 00:16:00.755 fused_ordering(638) 00:16:00.755 fused_ordering(639) 00:16:00.755 fused_ordering(640) 00:16:00.755 fused_ordering(641) 00:16:00.755 fused_ordering(642) 00:16:00.755 fused_ordering(643) 00:16:00.755 fused_ordering(644) 00:16:00.755 fused_ordering(645) 00:16:00.755 fused_ordering(646) 00:16:00.755 fused_ordering(647) 00:16:00.755 fused_ordering(648) 00:16:00.755 fused_ordering(649) 00:16:00.755 fused_ordering(650) 00:16:00.755 fused_ordering(651) 00:16:00.755 fused_ordering(652) 00:16:00.755 fused_ordering(653) 00:16:00.755 fused_ordering(654) 00:16:00.755 fused_ordering(655) 00:16:00.755 fused_ordering(656) 00:16:00.755 fused_ordering(657) 00:16:00.755 fused_ordering(658) 00:16:00.755 fused_ordering(659) 00:16:00.755 fused_ordering(660) 00:16:00.755 fused_ordering(661) 00:16:00.755 fused_ordering(662) 00:16:00.755 fused_ordering(663) 00:16:00.755 fused_ordering(664) 00:16:00.755 fused_ordering(665) 00:16:00.755 fused_ordering(666) 00:16:00.755 fused_ordering(667) 00:16:00.755 fused_ordering(668) 00:16:00.755 fused_ordering(669) 00:16:00.755 fused_ordering(670) 00:16:00.755 fused_ordering(671) 00:16:00.755 fused_ordering(672) 00:16:00.755 fused_ordering(673) 00:16:00.755 fused_ordering(674) 00:16:00.755 fused_ordering(675) 00:16:00.755 fused_ordering(676) 00:16:00.755 fused_ordering(677) 00:16:00.755 fused_ordering(678) 00:16:00.755 fused_ordering(679) 00:16:00.755 fused_ordering(680) 00:16:00.755 fused_ordering(681) 00:16:00.755 fused_ordering(682) 00:16:00.755 fused_ordering(683) 00:16:00.755 fused_ordering(684) 00:16:00.755 fused_ordering(685) 00:16:00.756 fused_ordering(686) 00:16:00.756 fused_ordering(687) 00:16:00.756 fused_ordering(688) 00:16:00.756 fused_ordering(689) 00:16:00.756 fused_ordering(690) 00:16:00.756 fused_ordering(691) 00:16:00.756 fused_ordering(692) 00:16:00.756 fused_ordering(693) 00:16:00.756 fused_ordering(694) 00:16:00.756 fused_ordering(695) 00:16:00.756 fused_ordering(696) 00:16:00.756 fused_ordering(697) 00:16:00.756 fused_ordering(698) 00:16:00.756 fused_ordering(699) 00:16:00.756 fused_ordering(700) 00:16:00.756 fused_ordering(701) 00:16:00.756 fused_ordering(702) 00:16:00.756 fused_ordering(703) 00:16:00.756 fused_ordering(704) 00:16:00.756 fused_ordering(705) 00:16:00.756 fused_ordering(706) 00:16:00.756 fused_ordering(707) 00:16:00.756 fused_ordering(708) 00:16:00.756 fused_ordering(709) 00:16:00.756 fused_ordering(710) 00:16:00.756 fused_ordering(711) 00:16:00.756 fused_ordering(712) 00:16:00.756 fused_ordering(713) 00:16:00.756 fused_ordering(714) 00:16:00.756 fused_ordering(715) 00:16:00.756 fused_ordering(716) 00:16:00.756 fused_ordering(717) 00:16:00.756 fused_ordering(718) 00:16:00.756 fused_ordering(719) 00:16:00.756 fused_ordering(720) 00:16:00.756 fused_ordering(721) 00:16:00.756 fused_ordering(722) 00:16:00.756 fused_ordering(723) 00:16:00.756 fused_ordering(724) 00:16:00.756 fused_ordering(725) 00:16:00.756 fused_ordering(726) 00:16:00.756 fused_ordering(727) 00:16:00.756 fused_ordering(728) 00:16:00.756 fused_ordering(729) 00:16:00.756 fused_ordering(730) 00:16:00.756 fused_ordering(731) 00:16:00.756 fused_ordering(732) 00:16:00.756 fused_ordering(733) 00:16:00.756 fused_ordering(734) 00:16:00.756 fused_ordering(735) 00:16:00.756 fused_ordering(736) 00:16:00.756 fused_ordering(737) 00:16:00.756 fused_ordering(738) 00:16:00.756 fused_ordering(739) 00:16:00.756 fused_ordering(740) 00:16:00.756 fused_ordering(741) 00:16:00.756 fused_ordering(742) 00:16:00.756 fused_ordering(743) 
00:16:00.756 fused_ordering(744) 00:16:00.756 fused_ordering(745) 00:16:00.756 fused_ordering(746) 00:16:00.756 fused_ordering(747) 00:16:00.756 fused_ordering(748) 00:16:00.756 fused_ordering(749) 00:16:00.756 fused_ordering(750) 00:16:00.756 fused_ordering(751) 00:16:00.756 fused_ordering(752) 00:16:00.756 fused_ordering(753) 00:16:00.756 fused_ordering(754) 00:16:00.756 fused_ordering(755) 00:16:00.756 fused_ordering(756) 00:16:00.756 fused_ordering(757) 00:16:00.756 fused_ordering(758) 00:16:00.756 fused_ordering(759) 00:16:00.756 fused_ordering(760) 00:16:00.756 fused_ordering(761) 00:16:00.756 fused_ordering(762) 00:16:00.756 fused_ordering(763) 00:16:00.756 fused_ordering(764) 00:16:00.756 fused_ordering(765) 00:16:00.756 fused_ordering(766) 00:16:00.756 fused_ordering(767) 00:16:00.756 fused_ordering(768) 00:16:00.756 fused_ordering(769) 00:16:00.756 fused_ordering(770) 00:16:00.756 fused_ordering(771) 00:16:00.756 fused_ordering(772) 00:16:00.756 fused_ordering(773) 00:16:00.756 fused_ordering(774) 00:16:00.756 fused_ordering(775) 00:16:00.756 fused_ordering(776) 00:16:00.756 fused_ordering(777) 00:16:00.756 fused_ordering(778) 00:16:00.756 fused_ordering(779) 00:16:00.756 fused_ordering(780) 00:16:00.756 fused_ordering(781) 00:16:00.756 fused_ordering(782) 00:16:00.756 fused_ordering(783) 00:16:00.756 fused_ordering(784) 00:16:00.756 fused_ordering(785) 00:16:00.756 fused_ordering(786) 00:16:00.756 fused_ordering(787) 00:16:00.756 fused_ordering(788) 00:16:00.756 fused_ordering(789) 00:16:00.756 fused_ordering(790) 00:16:00.756 fused_ordering(791) 00:16:00.756 fused_ordering(792) 00:16:00.756 fused_ordering(793) 00:16:00.756 fused_ordering(794) 00:16:00.756 fused_ordering(795) 00:16:00.756 fused_ordering(796) 00:16:00.756 fused_ordering(797) 00:16:00.756 fused_ordering(798) 00:16:00.756 fused_ordering(799) 00:16:00.756 fused_ordering(800) 00:16:00.756 fused_ordering(801) 00:16:00.756 fused_ordering(802) 00:16:00.756 fused_ordering(803) 00:16:00.756 fused_ordering(804) 00:16:00.756 fused_ordering(805) 00:16:00.756 fused_ordering(806) 00:16:00.756 fused_ordering(807) 00:16:00.756 fused_ordering(808) 00:16:00.756 fused_ordering(809) 00:16:00.756 fused_ordering(810) 00:16:00.756 fused_ordering(811) 00:16:00.756 fused_ordering(812) 00:16:00.756 fused_ordering(813) 00:16:00.756 fused_ordering(814) 00:16:00.756 fused_ordering(815) 00:16:00.756 fused_ordering(816) 00:16:00.756 fused_ordering(817) 00:16:00.756 fused_ordering(818) 00:16:00.756 fused_ordering(819) 00:16:00.756 fused_ordering(820) 00:16:01.692 fused_ordering(821) 00:16:01.692 fused_ordering(822) 00:16:01.692 fused_ordering(823) 00:16:01.692 fused_ordering(824) 00:16:01.692 fused_ordering(825) 00:16:01.692 fused_ordering(826) 00:16:01.692 fused_ordering(827) 00:16:01.692 fused_ordering(828) 00:16:01.692 fused_ordering(829) 00:16:01.692 fused_ordering(830) 00:16:01.692 fused_ordering(831) 00:16:01.692 fused_ordering(832) 00:16:01.692 fused_ordering(833) 00:16:01.692 fused_ordering(834) 00:16:01.692 fused_ordering(835) 00:16:01.692 fused_ordering(836) 00:16:01.692 fused_ordering(837) 00:16:01.692 fused_ordering(838) 00:16:01.692 fused_ordering(839) 00:16:01.692 fused_ordering(840) 00:16:01.692 fused_ordering(841) 00:16:01.692 fused_ordering(842) 00:16:01.692 fused_ordering(843) 00:16:01.692 fused_ordering(844) 00:16:01.692 fused_ordering(845) 00:16:01.692 fused_ordering(846) 00:16:01.692 fused_ordering(847) 00:16:01.692 fused_ordering(848) 00:16:01.692 fused_ordering(849) 00:16:01.692 fused_ordering(850) 00:16:01.692 
fused_ordering(851) 00:16:01.692 fused_ordering(852) 00:16:01.692 fused_ordering(853) 00:16:01.692 fused_ordering(854) 00:16:01.692 fused_ordering(855) 00:16:01.692 fused_ordering(856) 00:16:01.692 fused_ordering(857) 00:16:01.692 fused_ordering(858) 00:16:01.692 fused_ordering(859) 00:16:01.692 fused_ordering(860) 00:16:01.692 fused_ordering(861) 00:16:01.692 fused_ordering(862) 00:16:01.692 fused_ordering(863) 00:16:01.692 fused_ordering(864) 00:16:01.692 fused_ordering(865) 00:16:01.692 fused_ordering(866) 00:16:01.692 fused_ordering(867) 00:16:01.692 fused_ordering(868) 00:16:01.692 fused_ordering(869) 00:16:01.692 fused_ordering(870) 00:16:01.692 fused_ordering(871) 00:16:01.692 fused_ordering(872) 00:16:01.692 fused_ordering(873) 00:16:01.692 fused_ordering(874) 00:16:01.692 fused_ordering(875) 00:16:01.692 fused_ordering(876) 00:16:01.693 fused_ordering(877) 00:16:01.693 fused_ordering(878) 00:16:01.693 fused_ordering(879) 00:16:01.693 fused_ordering(880) 00:16:01.693 fused_ordering(881) 00:16:01.693 fused_ordering(882) 00:16:01.693 fused_ordering(883) 00:16:01.693 fused_ordering(884) 00:16:01.693 fused_ordering(885) 00:16:01.693 fused_ordering(886) 00:16:01.693 fused_ordering(887) 00:16:01.693 fused_ordering(888) 00:16:01.693 fused_ordering(889) 00:16:01.693 fused_ordering(890) 00:16:01.693 fused_ordering(891) 00:16:01.693 fused_ordering(892) 00:16:01.693 fused_ordering(893) 00:16:01.693 fused_ordering(894) 00:16:01.693 fused_ordering(895) 00:16:01.693 fused_ordering(896) 00:16:01.693 fused_ordering(897) 00:16:01.693 fused_ordering(898) 00:16:01.693 fused_ordering(899) 00:16:01.693 fused_ordering(900) 00:16:01.693 fused_ordering(901) 00:16:01.693 fused_ordering(902) 00:16:01.693 fused_ordering(903) 00:16:01.693 fused_ordering(904) 00:16:01.693 fused_ordering(905) 00:16:01.693 fused_ordering(906) 00:16:01.693 fused_ordering(907) 00:16:01.693 fused_ordering(908) 00:16:01.693 fused_ordering(909) 00:16:01.693 fused_ordering(910) 00:16:01.693 fused_ordering(911) 00:16:01.693 fused_ordering(912) 00:16:01.693 fused_ordering(913) 00:16:01.693 fused_ordering(914) 00:16:01.693 fused_ordering(915) 00:16:01.693 fused_ordering(916) 00:16:01.693 fused_ordering(917) 00:16:01.693 fused_ordering(918) 00:16:01.693 fused_ordering(919) 00:16:01.693 fused_ordering(920) 00:16:01.693 fused_ordering(921) 00:16:01.693 fused_ordering(922) 00:16:01.693 fused_ordering(923) 00:16:01.693 fused_ordering(924) 00:16:01.693 fused_ordering(925) 00:16:01.693 fused_ordering(926) 00:16:01.693 fused_ordering(927) 00:16:01.693 fused_ordering(928) 00:16:01.693 fused_ordering(929) 00:16:01.693 fused_ordering(930) 00:16:01.693 fused_ordering(931) 00:16:01.693 fused_ordering(932) 00:16:01.693 fused_ordering(933) 00:16:01.693 fused_ordering(934) 00:16:01.693 fused_ordering(935) 00:16:01.693 fused_ordering(936) 00:16:01.693 fused_ordering(937) 00:16:01.693 fused_ordering(938) 00:16:01.693 fused_ordering(939) 00:16:01.693 fused_ordering(940) 00:16:01.693 fused_ordering(941) 00:16:01.693 fused_ordering(942) 00:16:01.693 fused_ordering(943) 00:16:01.693 fused_ordering(944) 00:16:01.693 fused_ordering(945) 00:16:01.693 fused_ordering(946) 00:16:01.693 fused_ordering(947) 00:16:01.693 fused_ordering(948) 00:16:01.693 fused_ordering(949) 00:16:01.693 fused_ordering(950) 00:16:01.693 fused_ordering(951) 00:16:01.693 fused_ordering(952) 00:16:01.693 fused_ordering(953) 00:16:01.693 fused_ordering(954) 00:16:01.693 fused_ordering(955) 00:16:01.693 fused_ordering(956) 00:16:01.693 fused_ordering(957) 00:16:01.693 fused_ordering(958) 
00:16:01.693 fused_ordering(959) 00:16:01.693 fused_ordering(960) 00:16:01.693 fused_ordering(961) 00:16:01.693 fused_ordering(962) 00:16:01.693 fused_ordering(963) 00:16:01.693 fused_ordering(964) 00:16:01.693 fused_ordering(965) 00:16:01.693 fused_ordering(966) 00:16:01.693 fused_ordering(967) 00:16:01.693 fused_ordering(968) 00:16:01.693 fused_ordering(969) 00:16:01.693 fused_ordering(970) 00:16:01.693 fused_ordering(971) 00:16:01.693 fused_ordering(972) 00:16:01.693 fused_ordering(973) 00:16:01.693 fused_ordering(974) 00:16:01.693 fused_ordering(975) 00:16:01.693 fused_ordering(976) 00:16:01.693 fused_ordering(977) 00:16:01.693 fused_ordering(978) 00:16:01.693 fused_ordering(979) 00:16:01.693 fused_ordering(980) 00:16:01.693 fused_ordering(981) 00:16:01.693 fused_ordering(982) 00:16:01.693 fused_ordering(983) 00:16:01.693 fused_ordering(984) 00:16:01.693 fused_ordering(985) 00:16:01.693 fused_ordering(986) 00:16:01.693 fused_ordering(987) 00:16:01.693 fused_ordering(988) 00:16:01.693 fused_ordering(989) 00:16:01.693 fused_ordering(990) 00:16:01.693 fused_ordering(991) 00:16:01.693 fused_ordering(992) 00:16:01.693 fused_ordering(993) 00:16:01.693 fused_ordering(994) 00:16:01.693 fused_ordering(995) 00:16:01.693 fused_ordering(996) 00:16:01.693 fused_ordering(997) 00:16:01.693 fused_ordering(998) 00:16:01.693 fused_ordering(999) 00:16:01.693 fused_ordering(1000) 00:16:01.693 fused_ordering(1001) 00:16:01.693 fused_ordering(1002) 00:16:01.693 fused_ordering(1003) 00:16:01.693 fused_ordering(1004) 00:16:01.693 fused_ordering(1005) 00:16:01.693 fused_ordering(1006) 00:16:01.693 fused_ordering(1007) 00:16:01.693 fused_ordering(1008) 00:16:01.693 fused_ordering(1009) 00:16:01.693 fused_ordering(1010) 00:16:01.693 fused_ordering(1011) 00:16:01.693 fused_ordering(1012) 00:16:01.693 fused_ordering(1013) 00:16:01.693 fused_ordering(1014) 00:16:01.693 fused_ordering(1015) 00:16:01.693 fused_ordering(1016) 00:16:01.693 fused_ordering(1017) 00:16:01.693 fused_ordering(1018) 00:16:01.693 fused_ordering(1019) 00:16:01.693 fused_ordering(1020) 00:16:01.693 fused_ordering(1021) 00:16:01.693 fused_ordering(1022) 00:16:01.693 fused_ordering(1023) 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.693 rmmod nvme_tcp 00:16:01.693 rmmod nvme_fabrics 00:16:01.693 rmmod nvme_keyring 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:01.693 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:01.693 09:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1505574 ']' 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1505574 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1505574 ']' 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1505574 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505574 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505574' 00:16:01.694 killing process with pid 1505574 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1505574 00:16:01.694 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1505574 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.954 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:04.492 00:16:04.492 real 0m9.544s 00:16:04.492 user 0m6.871s 00:16:04.492 sys 0m4.235s 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.492 ************************************ 00:16:04.492 END TEST nvmf_fused_ordering 00:16:04.492 
************************************ 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.492 ************************************ 00:16:04.492 START TEST nvmf_ns_masking 00:16:04.492 ************************************ 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:04.492 * Looking for test storage... 00:16:04.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:16:04.492 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.492 --rc genhtml_branch_coverage=1 00:16:04.492 --rc genhtml_function_coverage=1 00:16:04.492 --rc genhtml_legend=1 00:16:04.492 --rc geninfo_all_blocks=1 00:16:04.492 --rc geninfo_unexecuted_blocks=1 00:16:04.492 00:16:04.492 ' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.492 --rc genhtml_branch_coverage=1 00:16:04.492 --rc genhtml_function_coverage=1 00:16:04.492 --rc genhtml_legend=1 00:16:04.492 --rc geninfo_all_blocks=1 00:16:04.492 --rc geninfo_unexecuted_blocks=1 00:16:04.492 00:16:04.492 ' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.492 --rc genhtml_branch_coverage=1 00:16:04.492 --rc genhtml_function_coverage=1 00:16:04.492 --rc genhtml_legend=1 00:16:04.492 --rc geninfo_all_blocks=1 00:16:04.492 --rc geninfo_unexecuted_blocks=1 00:16:04.492 00:16:04.492 ' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:04.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.492 --rc genhtml_branch_coverage=1 00:16:04.492 --rc genhtml_function_coverage=1 00:16:04.492 --rc genhtml_legend=1 00:16:04.492 --rc geninfo_all_blocks=1 00:16:04.492 --rc geninfo_unexecuted_blocks=1 00:16:04.492 00:16:04.492 ' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.492 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=67e86a9d-8788-4fc9-869a-81a89a663393 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=020dd762-549e-4c43-9527-7b973f69adcb 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=798c7319-d3eb-4602-8a70-f033e1662352 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.493 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:07.026 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.027 09:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:07.027 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:07.027 09:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:07.027 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:07.027 Found net devices under 0000:84:00.0: cvl_0_0 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:07.027 Found net devices under 0000:84:00.1: cvl_0_1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.027 09:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:07.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:07.027 00:16:07.027 --- 10.0.0.2 ping statistics --- 00:16:07.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.027 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:07.027 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:16:07.027 00:16:07.027 --- 10.0.0.1 ping statistics --- 00:16:07.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.028 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1508203 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1508203 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1508203 ']' 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.028 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:07.028 [2024-10-07 09:37:01.788849] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:16:07.028 [2024-10-07 09:37:01.788969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.286 [2024-10-07 09:37:01.879862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.286 [2024-10-07 09:37:02.024400] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.286 [2024-10-07 09:37:02.024479] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.286 [2024-10-07 09:37:02.024521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.286 [2024-10-07 09:37:02.024546] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.286 [2024-10-07 09:37:02.024567] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
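The nvmf_tcp_init sequence traced a little earlier (nvmf/common.sh@250 through @291) is what builds the test topology: the target-side port is moved into a dedicated network namespace so that initiator and target traffic really crosses the link between the two E810 ports. A condensed sketch, with the interface names and addresses exactly as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk                  # namespace that will host the NVMe-oF target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into that namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions reachable

The real helper additionally tags the iptables rule with an SPDK_NVMF comment so it can be stripped again at teardown (the iptables-save | grep -v SPDK_NVMF | iptables-restore step visible near the end of this test), and nvmf_tgt is then launched inside the namespace via ip netns exec cvl_0_0_ns_spdk; that namespaced process is the one whose DPDK and reactor start-up notices surround this point in the log.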
00:16:07.286 [2024-10-07 09:37:02.025445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.544 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:08.110 [2024-10-07 09:37:02.769513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.110 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:08.110 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:08.110 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:08.368 Malloc1 00:16:08.368 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:08.934 Malloc2 00:16:08.934 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:09.193 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:09.451 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.710 [2024-10-07 09:37:04.422068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.710 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:09.710 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 798c7319-d3eb-4602-8a70-f033e1662352 -a 10.0.0.2 -s 4420 -i 4 00:16:09.970 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.971 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:09.971 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.971 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:09.971 
09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.870 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.870 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.870 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.128 [ 0]:0x1 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91825fcb4fed41afadd4667a66873fb6 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91825fcb4fed41afadd4667a66873fb6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.128 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:12.386 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:12.386 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.386 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.386 [ 0]:0x1 00:16:12.386 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.386 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91825fcb4fed41afadd4667a66873fb6 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91825fcb4fed41afadd4667a66873fb6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.644 09:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.644 [ 1]:0x2 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.644 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.211 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:13.470 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:13.470 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 798c7319-d3eb-4602-8a70-f033e1662352 -a 10.0.0.2 -s 4420 -i 4 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:13.727 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.624 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.882 [ 0]:0x2 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.882 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:16.448 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:16.448 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.448 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:16.448 [ 0]:0x1 00:16:16.448 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:16.448 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91825fcb4fed41afadd4667a66873fb6 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91825fcb4fed41afadd4667a66873fb6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:16.448 [ 1]:0x2 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.448 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.014 09:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.014 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:17.272 [ 0]:0x2 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:17.272 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.273 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:17.839 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:17.839 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 798c7319-d3eb-4602-8a70-f033e1662352 -a 10.0.0.2 -s 4420 -i 4 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:18.097 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:19.997 [ 0]:0x1 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91825fcb4fed41afadd4667a66873fb6 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91825fcb4fed41afadd4667a66873fb6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:19.997 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.255 [ 1]:0x2 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.255 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.513 [ 0]:0x2 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.513 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.771 09:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:20.771 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.030 [2024-10-07 09:37:15.712701] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:21.030 request: 00:16:21.030 { 00:16:21.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.030 "nsid": 2, 00:16:21.030 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.030 "method": "nvmf_ns_remove_host", 00:16:21.030 "req_id": 1 00:16:21.030 } 00:16:21.030 Got JSON-RPC error response 00:16:21.030 response: 00:16:21.030 { 00:16:21.030 "code": -32602, 00:16:21.030 "message": "Invalid parameters" 00:16:21.030 } 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:21.030 09:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.030 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.030 [ 0]:0x2 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4481da57c1b4700a094c02708fb6a7a 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4481da57c1b4700a094c02708fb6a7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:21.289 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1509966 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1509966 /var/tmp/host.sock 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1509966 ']' 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.289 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.289 [2024-10-07 09:37:16.082471] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:16:21.289 [2024-10-07 09:37:16.082557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509966 ] 00:16:21.548 [2024-10-07 09:37:16.143161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.548 [2024-10-07 09:37:16.259977] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.806 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.807 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:21.807 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.372 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:22.630 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 67e86a9d-8788-4fc9-869a-81a89a663393 00:16:22.630 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:22.630 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 67E86A9D87884FC9869A81A89A663393 -i 00:16:22.888 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 020dd762-549e-4c43-9527-7b973f69adcb 00:16:22.888 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:22.888 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 020DD762549E4C4395277B973F69ADCB -i 00:16:23.453 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:23.713 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:23.971 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:23.971 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:24.536 nvme0n1 00:16:24.536 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:24.536 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:25.468 nvme1n2 00:16:25.468 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:25.468 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:25.468 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:25.468 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:25.468 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:25.726 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:25.726 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:25.726 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:25.726 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:25.983 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 67e86a9d-8788-4fc9-869a-81a89a663393 == \6\7\e\8\6\a\9\d\-\8\7\8\8\-\4\f\c\9\-\8\6\9\a\-\8\1\a\8\9\a\6\6\3\3\9\3 ]] 00:16:25.983 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:25.983 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:25.983 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
020dd762-549e-4c43-9527-7b973f69adcb == \0\2\0\d\d\7\6\2\-\5\4\9\e\-\4\c\4\3\-\9\5\2\7\-\7\b\9\7\3\f\6\9\a\d\c\b ]] 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1509966 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1509966 ']' 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1509966 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.548 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509966 00:16:26.805 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:26.805 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:26.805 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509966' 00:16:26.805 killing process with pid 1509966 00:16:26.805 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1509966 00:16:26.805 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1509966 00:16:27.063 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.627 rmmod nvme_tcp 00:16:27.627 rmmod nvme_fabrics 00:16:27.627 rmmod nvme_keyring 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1508203 ']' 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1508203 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1508203 ']' 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1508203 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508203 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508203' 00:16:27.627 killing process with pid 1508203 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1508203 00:16:27.627 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1508203 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.194 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:30.099 00:16:30.099 real 0m25.973s 00:16:30.099 user 0m36.831s 00:16:30.099 sys 0m5.241s 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.099 ************************************ 00:16:30.099 END TEST nvmf_ns_masking 00:16:30.099 ************************************ 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
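Condensed recap of what the nvmf_ns_masking run above actually drove on the target side, with the long /var/jenkins/workspace/.../scripts/rpc.py path shortened to rpc.py for readability; the sub-commands and arguments are exactly those in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # expose NSID 1 to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again

A namespace added with --no-auto-visible stays hidden from every controller until nvmf_ns_add_host attaches a host NQN to it. The host side of the test checked each transition with nvme list-ns /dev/nvme0 and nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid: a visible namespace reports its real NGUID (91825fcb... or d4481da5... in the trace), while a masked one comes back with the all-zero NGUID, which is what the NOT wrappers above assert.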
00:16:30.099 ************************************ 00:16:30.099 START TEST nvmf_nvme_cli 00:16:30.099 ************************************ 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:30.099 * Looking for test storage... 00:16:30.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:30.099 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.358 --rc genhtml_branch_coverage=1 00:16:30.358 --rc genhtml_function_coverage=1 00:16:30.358 --rc genhtml_legend=1 00:16:30.358 --rc geninfo_all_blocks=1 00:16:30.358 --rc geninfo_unexecuted_blocks=1 00:16:30.358 00:16:30.358 ' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.358 --rc genhtml_branch_coverage=1 00:16:30.358 --rc genhtml_function_coverage=1 00:16:30.358 --rc genhtml_legend=1 00:16:30.358 --rc geninfo_all_blocks=1 00:16:30.358 --rc geninfo_unexecuted_blocks=1 00:16:30.358 00:16:30.358 ' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.358 --rc genhtml_branch_coverage=1 00:16:30.358 --rc genhtml_function_coverage=1 00:16:30.358 --rc genhtml_legend=1 00:16:30.358 --rc geninfo_all_blocks=1 00:16:30.358 --rc geninfo_unexecuted_blocks=1 00:16:30.358 00:16:30.358 ' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:30.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.358 --rc genhtml_branch_coverage=1 00:16:30.358 --rc genhtml_function_coverage=1 00:16:30.358 --rc genhtml_legend=1 00:16:30.358 --rc geninfo_all_blocks=1 00:16:30.358 --rc geninfo_unexecuted_blocks=1 00:16:30.358 00:16:30.358 ' 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.358 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
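The block above is the coverage preamble run_test emits before every test: it reads the installed lcov version (1.15 on this builder) and, since that is below 2, exports the lcov_*-style coverage flags shown. A minimal sketch of that check, assuming the lt helper from scripts/common.sh compares versions as traced:

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 here
    if lt "$ver" 2; then                        # true for lcov older than 2, so the lcov_* option names are used
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    export LCOV_OPTS="$lcov_rc_opt --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
    export LCOV="lcov $LCOV_OPTS"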
00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.358 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.359 09:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.359 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.965 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:32.966 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:32.966 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.966 
09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:32.966 Found net devices under 0000:84:00.0: cvl_0_0 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:32.966 Found net devices under 0000:84:00.1: cvl_0_1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:32.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:16:32.966 00:16:32.966 --- 10.0.0.2 ping statistics --- 00:16:32.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.966 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:16:32.966 00:16:32.966 --- 10.0.0.1 ping statistics --- 00:16:32.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.966 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1512747 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1512747 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1512747 ']' 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.966 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.966 [2024-10-07 09:37:27.699468] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:16:32.966 [2024-10-07 09:37:27.699560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.966 [2024-10-07 09:37:27.765982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.225 [2024-10-07 09:37:27.876225] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.225 [2024-10-07 09:37:27.876283] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.225 [2024-10-07 09:37:27.876312] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.225 [2024-10-07 09:37:27.876323] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.225 [2024-10-07 09:37:27.876333] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.225 [2024-10-07 09:37:27.877988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.225 [2024-10-07 09:37:27.878015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.225 [2024-10-07 09:37:27.878038] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.225 [2024-10-07 09:37:27.878042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.225 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 [2024-10-07 09:37:28.040381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.482 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.482 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 Malloc0 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
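Stripped of the xtrace noise, nvme_cli.sh is driving the flow these entries record: bring up nvmf_tgt inside the target namespace, configure it over RPC (rpc_cmd is the harness wrapper that forwards to scripts/rpc.py against the /var/tmp/spdk.sock socket waited on above), then exercise it from the host with nvme-cli. A condensed sketch using the NQNs, addresses and sizes from this run; the connect, verify and teardown steps appear further down in the trace:

    # target side, inside the cvl_0_0_ns_spdk namespace
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                            # 1512747 in this run
    waitforlisten $nvmfpid                                # block until the RPC socket is listening
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # "*** TCP Transport Init ***"
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 / 512 come from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host side, on the initiator interface (cvl_0_1 / 10.0.0.1)
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expects 2 (/dev/nvme0n1 and /dev/nvme0n2)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1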
00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 Malloc1 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 [2024-10-07 09:37:28.127496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:33.483 00:16:33.483 Discovery Log Number of Records 2, Generation counter 2 00:16:33.483 =====Discovery Log Entry 0====== 00:16:33.483 trtype: tcp 00:16:33.483 adrfam: ipv4 00:16:33.483 subtype: current discovery subsystem 00:16:33.483 treq: not required 00:16:33.483 portid: 0 00:16:33.483 trsvcid: 4420 00:16:33.483 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:33.483 traddr: 10.0.0.2 00:16:33.483 eflags: explicit discovery connections, duplicate discovery information 00:16:33.483 sectype: none 00:16:33.483 =====Discovery Log Entry 1====== 00:16:33.483 trtype: tcp 00:16:33.483 adrfam: ipv4 00:16:33.483 subtype: nvme subsystem 00:16:33.483 treq: not required 00:16:33.483 portid: 0 00:16:33.483 trsvcid: 4420 00:16:33.483 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:33.483 traddr: 10.0.0.2 00:16:33.483 eflags: none 00:16:33.483 sectype: none 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:33.483 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:34.421 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:36.318 09:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:36.318 /dev/nvme0n2 ]] 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.318 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:36.575 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.833 09:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.833 rmmod nvme_tcp 00:16:36.833 rmmod nvme_fabrics 00:16:36.833 rmmod nvme_keyring 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1512747 ']' 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1512747 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1512747 ']' 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1512747 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1512747 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1512747' 00:16:36.833 killing process with pid 1512747 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1512747 00:16:36.833 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1512747 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.399 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:39.299 00:16:39.299 real 0m9.158s 00:16:39.299 user 0m16.659s 00:16:39.299 sys 0m2.767s 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.299 ************************************ 00:16:39.299 END TEST nvmf_nvme_cli 00:16:39.299 ************************************ 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.299 ************************************ 00:16:39.299 START TEST nvmf_vfio_user 00:16:39.299 ************************************ 00:16:39.299 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:16:39.559 * Looking for test storage... 00:16:39.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.559 --rc genhtml_branch_coverage=1 00:16:39.559 --rc genhtml_function_coverage=1 00:16:39.559 --rc genhtml_legend=1 00:16:39.559 --rc geninfo_all_blocks=1 00:16:39.559 --rc geninfo_unexecuted_blocks=1 00:16:39.559 00:16:39.559 ' 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.559 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1513568 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1513568' 00:16:39.560 Process pid: 1513568 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1513568 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1513568 ']' 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.560 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:39.560 [2024-10-07 09:37:34.325978] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:16:39.560 [2024-10-07 09:37:34.326073] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.819 [2024-10-07 09:37:34.417572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.819 [2024-10-07 09:37:34.534159] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.819 [2024-10-07 09:37:34.534228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:39.819 [2024-10-07 09:37:34.534245] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.819 [2024-10-07 09:37:34.534259] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.819 [2024-10-07 09:37:34.534271] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.819 [2024-10-07 09:37:34.536222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.819 [2024-10-07 09:37:34.536276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.819 [2024-10-07 09:37:34.536392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.819 [2024-10-07 09:37:34.536395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.753 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.753 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:40.753 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:41.686 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:42.251 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:42.251 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:42.251 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.251 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:42.251 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:42.509 Malloc1 00:16:42.509 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:42.766 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:43.703 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:44.267 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:44.267 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:44.267 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:44.525 Malloc2 00:16:44.525 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:16:45.092 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:45.350 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:45.607 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:45.607 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:45.607 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:45.608 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:45.608 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:45.608 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:45.608 [2024-10-07 09:37:40.421054] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:16:45.608 [2024-10-07 09:37:40.421104] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514369 ] 00:16:45.868 [2024-10-07 09:37:40.459229] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:45.868 [2024-10-07 09:37:40.467359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:45.868 [2024-10-07 09:37:40.467392] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7008764000 00:16:45.868 [2024-10-07 09:37:40.468351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.469344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.470351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.471353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.472361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.473369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.474374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.475379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:45.868 [2024-10-07 09:37:40.476389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:45.868 [2024-10-07 09:37:40.476410] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7008759000 00:16:45.868 [2024-10-07 09:37:40.477529] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:45.868 [2024-10-07 09:37:40.493564] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:45.868 [2024-10-07 09:37:40.493602] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:45.868 [2024-10-07 09:37:40.498528] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:45.868 [2024-10-07 09:37:40.498588] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:45.868 [2024-10-07 09:37:40.498680] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:45.868 [2024-10-07 09:37:40.498709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:45.868 [2024-10-07 09:37:40.498720] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:45.868 [2024-10-07 09:37:40.499518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:45.868 [2024-10-07 09:37:40.499538] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:45.868 [2024-10-07 09:37:40.499551] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:45.868 [2024-10-07 09:37:40.500526] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:45.868 [2024-10-07 09:37:40.500545] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:45.868 [2024-10-07 09:37:40.500558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.501532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:45.868 [2024-10-07 09:37:40.501550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.502534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:45.868 [2024-10-07 
09:37:40.502554] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:45.868 [2024-10-07 09:37:40.502563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.502574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.502683] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:45.868 [2024-10-07 09:37:40.502691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.502699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:45.868 [2024-10-07 09:37:40.503540] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:45.868 [2024-10-07 09:37:40.504543] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:45.868 [2024-10-07 09:37:40.505547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:45.868 [2024-10-07 09:37:40.506545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:45.868 [2024-10-07 09:37:40.506637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:45.868 [2024-10-07 09:37:40.507561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:45.868 [2024-10-07 09:37:40.507580] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:45.868 [2024-10-07 09:37:40.507589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:45.868 [2024-10-07 09:37:40.507612] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:45.868 [2024-10-07 09:37:40.507626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:45.868 [2024-10-07 09:37:40.507649] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.868 [2024-10-07 09:37:40.507659] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.868 [2024-10-07 09:37:40.507665] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.868 [2024-10-07 09:37:40.507682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.868 [2024-10-07 09:37:40.507739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:45.868 [2024-10-07 09:37:40.507755] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:45.868 [2024-10-07 09:37:40.507763] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:45.868 [2024-10-07 09:37:40.507770] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:45.868 [2024-10-07 09:37:40.507777] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:45.868 [2024-10-07 09:37:40.507785] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:45.868 [2024-10-07 09:37:40.507792] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:45.868 [2024-10-07 09:37:40.507799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:45.868 [2024-10-07 09:37:40.507811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:45.868 [2024-10-07 09:37:40.507824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:45.868 [2024-10-07 09:37:40.507842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:45.868 [2024-10-07 09:37:40.507857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.868 [2024-10-07 09:37:40.507869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.868 [2024-10-07 09:37:40.507904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.868 [2024-10-07 09:37:40.507917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.868 [2024-10-07 09:37:40.507926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.507960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.507980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.507993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508004] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:45.869 [2024-10-07 09:37:40.508013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508147] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508160] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:45.869 [2024-10-07 09:37:40.508184] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:45.869 [2024-10-07 09:37:40.508190] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508238] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:45.869 [2024-10-07 09:37:40.508272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508298] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.869 [2024-10-07 09:37:40.508306] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.869 [2024-10-07 09:37:40.508312] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508388] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508400] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:45.869 [2024-10-07 09:37:40.508412] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.869 [2024-10-07 09:37:40.508418] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508512] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:45.869 [2024-10-07 09:37:40.508520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:45.869 [2024-10-07 09:37:40.508528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:45.869 [2024-10-07 09:37:40.508551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508668] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:45.869 [2024-10-07 09:37:40.508677] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:45.869 [2024-10-07 09:37:40.508683] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:45.869 [2024-10-07 09:37:40.508689] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:45.869 [2024-10-07 09:37:40.508695] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:45.869 [2024-10-07 09:37:40.508707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:45.869 [2024-10-07 09:37:40.508719] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:45.869 [2024-10-07 09:37:40.508727] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:45.869 [2024-10-07 09:37:40.508733] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508752] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:45.869 [2024-10-07 09:37:40.508760] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:45.869 [2024-10-07 09:37:40.508766] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508785] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:45.869 [2024-10-07 09:37:40.508793] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:45.869 [2024-10-07 09:37:40.508799] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:45.869 [2024-10-07 09:37:40.508807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:45.869 [2024-10-07 09:37:40.508818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:45.869 [2024-10-07 09:37:40.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:45.869 ===================================================== 00:16:45.869 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:45.869 ===================================================== 00:16:45.869 Controller Capabilities/Features 00:16:45.869 ================================ 00:16:45.869 Vendor ID: 4e58 00:16:45.869 Subsystem Vendor ID: 4e58 00:16:45.869 Serial Number: SPDK1 00:16:45.869 Model Number: SPDK bdev Controller 00:16:45.869 Firmware Version: 25.01 00:16:45.869 Recommended Arb Burst: 6 00:16:45.869 IEEE OUI Identifier: 8d 6b 50 00:16:45.869 Multi-path I/O 00:16:45.869 May have multiple subsystem ports: Yes 00:16:45.869 May have multiple controllers: Yes 00:16:45.869 Associated with SR-IOV VF: No 00:16:45.869 Max Data Transfer Size: 131072 00:16:45.869 Max Number of Namespaces: 32 00:16:45.869 Max Number of I/O Queues: 127 00:16:45.869 NVMe Specification Version (VS): 1.3 00:16:45.869 NVMe Specification Version (Identify): 1.3 00:16:45.869 Maximum Queue Entries: 256 00:16:45.869 Contiguous Queues Required: Yes 00:16:45.869 Arbitration Mechanisms Supported 00:16:45.869 Weighted Round Robin: Not Supported 00:16:45.869 Vendor Specific: Not Supported 00:16:45.869 Reset Timeout: 15000 ms 00:16:45.869 Doorbell Stride: 4 bytes 00:16:45.869 NVM Subsystem Reset: Not Supported 00:16:45.869 Command Sets Supported 00:16:45.870 NVM Command Set: Supported 00:16:45.870 Boot Partition: Not Supported 00:16:45.870 Memory Page Size Minimum: 4096 bytes 00:16:45.870 Memory Page Size Maximum: 4096 bytes 00:16:45.870 Persistent Memory Region: Not Supported 00:16:45.870 Optional Asynchronous Events Supported 00:16:45.870 Namespace Attribute Notices: Supported 00:16:45.870 Firmware Activation Notices: Not Supported 00:16:45.870 ANA Change Notices: Not Supported 00:16:45.870 PLE Aggregate Log Change Notices: Not Supported 00:16:45.870 LBA Status Info Alert Notices: Not Supported 00:16:45.870 EGE Aggregate Log Change Notices: Not Supported 00:16:45.870 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.870 Zone Descriptor Change Notices: Not Supported 00:16:45.870 Discovery Log Change Notices: Not Supported 00:16:45.870 Controller Attributes 00:16:45.870 128-bit Host Identifier: Supported 00:16:45.870 Non-Operational Permissive Mode: Not Supported 00:16:45.870 NVM Sets: Not Supported 00:16:45.870 Read Recovery Levels: Not Supported 00:16:45.870 Endurance Groups: Not Supported 00:16:45.870 Predictable Latency Mode: Not Supported 00:16:45.870 Traffic Based Keep ALive: Not Supported 00:16:45.870 Namespace Granularity: Not Supported 00:16:45.870 SQ Associations: Not Supported 00:16:45.870 UUID List: Not Supported 00:16:45.870 Multi-Domain Subsystem: Not Supported 00:16:45.870 Fixed Capacity Management: Not Supported 00:16:45.870 Variable Capacity Management: Not Supported 00:16:45.870 Delete Endurance Group: Not Supported 00:16:45.870 Delete NVM Set: Not Supported 00:16:45.870 Extended LBA Formats Supported: Not Supported 00:16:45.870 Flexible Data Placement Supported: Not Supported 00:16:45.870 00:16:45.870 Controller Memory Buffer Support 00:16:45.870 ================================ 00:16:45.870 Supported: No 00:16:45.870 00:16:45.870 Persistent Memory Region Support 00:16:45.870 
================================ 00:16:45.870 Supported: No 00:16:45.870 00:16:45.870 Admin Command Set Attributes 00:16:45.870 ============================ 00:16:45.870 Security Send/Receive: Not Supported 00:16:45.870 Format NVM: Not Supported 00:16:45.870 Firmware Activate/Download: Not Supported 00:16:45.870 Namespace Management: Not Supported 00:16:45.870 Device Self-Test: Not Supported 00:16:45.870 Directives: Not Supported 00:16:45.870 NVMe-MI: Not Supported 00:16:45.870 Virtualization Management: Not Supported 00:16:45.870 Doorbell Buffer Config: Not Supported 00:16:45.870 Get LBA Status Capability: Not Supported 00:16:45.870 Command & Feature Lockdown Capability: Not Supported 00:16:45.870 Abort Command Limit: 4 00:16:45.870 Async Event Request Limit: 4 00:16:45.870 Number of Firmware Slots: N/A 00:16:45.870 Firmware Slot 1 Read-Only: N/A 00:16:45.870 Firmware Activation Without Reset: N/A 00:16:45.870 Multiple Update Detection Support: N/A 00:16:45.870 Firmware Update Granularity: No Information Provided 00:16:45.870 Per-Namespace SMART Log: No 00:16:45.870 Asymmetric Namespace Access Log Page: Not Supported 00:16:45.870 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:45.870 Command Effects Log Page: Supported 00:16:45.870 Get Log Page Extended Data: Supported 00:16:45.870 Telemetry Log Pages: Not Supported 00:16:45.870 Persistent Event Log Pages: Not Supported 00:16:45.870 Supported Log Pages Log Page: May Support 00:16:45.870 Commands Supported & Effects Log Page: Not Supported 00:16:45.870 Feature Identifiers & Effects Log Page:May Support 00:16:45.870 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.870 Data Area 4 for Telemetry Log: Not Supported 00:16:45.870 Error Log Page Entries Supported: 128 00:16:45.870 Keep Alive: Supported 00:16:45.870 Keep Alive Granularity: 10000 ms 00:16:45.870 00:16:45.870 NVM Command Set Attributes 00:16:45.870 ========================== 00:16:45.870 Submission Queue Entry Size 00:16:45.870 Max: 64 00:16:45.870 Min: 64 00:16:45.870 Completion Queue Entry Size 00:16:45.870 Max: 16 00:16:45.870 Min: 16 00:16:45.870 Number of Namespaces: 32 00:16:45.870 Compare Command: Supported 00:16:45.870 Write Uncorrectable Command: Not Supported 00:16:45.870 Dataset Management Command: Supported 00:16:45.870 Write Zeroes Command: Supported 00:16:45.870 Set Features Save Field: Not Supported 00:16:45.870 Reservations: Not Supported 00:16:45.870 Timestamp: Not Supported 00:16:45.870 Copy: Supported 00:16:45.870 Volatile Write Cache: Present 00:16:45.870 Atomic Write Unit (Normal): 1 00:16:45.870 Atomic Write Unit (PFail): 1 00:16:45.870 Atomic Compare & Write Unit: 1 00:16:45.870 Fused Compare & Write: Supported 00:16:45.870 Scatter-Gather List 00:16:45.870 SGL Command Set: Supported (Dword aligned) 00:16:45.870 SGL Keyed: Not Supported 00:16:45.870 SGL Bit Bucket Descriptor: Not Supported 00:16:45.870 SGL Metadata Pointer: Not Supported 00:16:45.870 Oversized SGL: Not Supported 00:16:45.870 SGL Metadata Address: Not Supported 00:16:45.870 SGL Offset: Not Supported 00:16:45.870 Transport SGL Data Block: Not Supported 00:16:45.870 Replay Protected Memory Block: Not Supported 00:16:45.870 00:16:45.870 Firmware Slot Information 00:16:45.870 ========================= 00:16:45.870 Active slot: 1 00:16:45.870 Slot 1 Firmware Revision: 25.01 00:16:45.870 00:16:45.870 00:16:45.870 Commands Supported and Effects 00:16:45.870 ============================== 00:16:45.870 Admin Commands 00:16:45.870 -------------- 00:16:45.870 Get Log Page (02h): Supported 
00:16:45.870 Identify (06h): Supported 00:16:45.870 Abort (08h): Supported 00:16:45.870 Set Features (09h): Supported 00:16:45.870 Get Features (0Ah): Supported 00:16:45.870 Asynchronous Event Request (0Ch): Supported 00:16:45.870 Keep Alive (18h): Supported 00:16:45.870 I/O Commands 00:16:45.870 ------------ 00:16:45.870 Flush (00h): Supported LBA-Change 00:16:45.870 Write (01h): Supported LBA-Change 00:16:45.870 Read (02h): Supported 00:16:45.870 Compare (05h): Supported 00:16:45.870 Write Zeroes (08h): Supported LBA-Change 00:16:45.870 Dataset Management (09h): Supported LBA-Change 00:16:45.870 Copy (19h): Supported LBA-Change 00:16:45.870 00:16:45.870 Error Log 00:16:45.870 ========= 00:16:45.870 00:16:45.870 Arbitration 00:16:45.870 =========== 00:16:45.870 Arbitration Burst: 1 00:16:45.870 00:16:45.870 Power Management 00:16:45.870 ================ 00:16:45.870 Number of Power States: 1 00:16:45.870 Current Power State: Power State #0 00:16:45.870 Power State #0: 00:16:45.870 Max Power: 0.00 W 00:16:45.870 Non-Operational State: Operational 00:16:45.870 Entry Latency: Not Reported 00:16:45.870 Exit Latency: Not Reported 00:16:45.870 Relative Read Throughput: 0 00:16:45.870 Relative Read Latency: 0 00:16:45.870 Relative Write Throughput: 0 00:16:45.870 Relative Write Latency: 0 00:16:45.870 Idle Power: Not Reported 00:16:45.870 Active Power: Not Reported 00:16:45.870 Non-Operational Permissive Mode: Not Supported 00:16:45.870 00:16:45.870 Health Information 00:16:45.870 ================== 00:16:45.870 Critical Warnings: 00:16:45.870 Available Spare Space: OK 00:16:45.870 Temperature: OK 00:16:45.870 Device Reliability: OK 00:16:45.870 Read Only: No 00:16:45.870 Volatile Memory Backup: OK 00:16:45.870 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:45.870 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:45.870 Available Spare: 0% 00:16:45.870 Available Sp[2024-10-07 09:37:40.509020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:45.870 [2024-10-07 09:37:40.509037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:45.870 [2024-10-07 09:37:40.509081] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:45.870 [2024-10-07 09:37:40.509099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.870 [2024-10-07 09:37:40.509110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.870 [2024-10-07 09:37:40.509120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.870 [2024-10-07 09:37:40.509129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.870 [2024-10-07 09:37:40.512903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:45.870 [2024-10-07 09:37:40.512925] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:45.870 [2024-10-07 09:37:40.513587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:16:45.870 [2024-10-07 09:37:40.513658] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:45.870 [2024-10-07 09:37:40.513676] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:45.870 [2024-10-07 09:37:40.514599] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:45.871 [2024-10-07 09:37:40.514623] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:45.871 [2024-10-07 09:37:40.514683] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:45.871 [2024-10-07 09:37:40.516632] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:45.871 are Threshold: 0% 00:16:45.871 Life Percentage Used: 0% 00:16:45.871 Data Units Read: 0 00:16:45.871 Data Units Written: 0 00:16:45.871 Host Read Commands: 0 00:16:45.871 Host Write Commands: 0 00:16:45.871 Controller Busy Time: 0 minutes 00:16:45.871 Power Cycles: 0 00:16:45.871 Power On Hours: 0 hours 00:16:45.871 Unsafe Shutdowns: 0 00:16:45.871 Unrecoverable Media Errors: 0 00:16:45.871 Lifetime Error Log Entries: 0 00:16:45.871 Warning Temperature Time: 0 minutes 00:16:45.871 Critical Temperature Time: 0 minutes 00:16:45.871 00:16:45.871 Number of Queues 00:16:45.871 ================ 00:16:45.871 Number of I/O Submission Queues: 127 00:16:45.871 Number of I/O Completion Queues: 127 00:16:45.871 00:16:45.871 Active Namespaces 00:16:45.871 ================= 00:16:45.871 Namespace ID:1 00:16:45.871 Error Recovery Timeout: Unlimited 00:16:45.871 Command Set Identifier: NVM (00h) 00:16:45.871 Deallocate: Supported 00:16:45.871 Deallocated/Unwritten Error: Not Supported 00:16:45.871 Deallocated Read Value: Unknown 00:16:45.871 Deallocate in Write Zeroes: Not Supported 00:16:45.871 Deallocated Guard Field: 0xFFFF 00:16:45.871 Flush: Supported 00:16:45.871 Reservation: Supported 00:16:45.871 Namespace Sharing Capabilities: Multiple Controllers 00:16:45.871 Size (in LBAs): 131072 (0GiB) 00:16:45.871 Capacity (in LBAs): 131072 (0GiB) 00:16:45.871 Utilization (in LBAs): 131072 (0GiB) 00:16:45.871 NGUID: FD69EAF08C4C4127B26F885DAA302255 00:16:45.871 UUID: fd69eaf0-8c4c-4127-b26f-885daa302255 00:16:45.871 Thin Provisioning: Not Supported 00:16:45.871 Per-NS Atomic Units: Yes 00:16:45.871 Atomic Boundary Size (Normal): 0 00:16:45.871 Atomic Boundary Size (PFail): 0 00:16:45.871 Atomic Boundary Offset: 0 00:16:45.871 Maximum Single Source Range Length: 65535 00:16:45.871 Maximum Copy Length: 65535 00:16:45.871 Maximum Source Range Count: 1 00:16:45.871 NGUID/EUI64 Never Reused: No 00:16:45.871 Namespace Write Protected: No 00:16:45.871 Number of LBA Formats: 1 00:16:45.871 Current LBA Format: LBA Format #00 00:16:45.871 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:45.871 00:16:45.871 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:46.129 [2024-10-07 09:37:40.777810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:51.394 Initializing NVMe Controllers 00:16:51.394 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:51.394 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:51.394 Initialization complete. Launching workers. 00:16:51.394 ======================================================== 00:16:51.394 Latency(us) 00:16:51.394 Device Information : IOPS MiB/s Average min max 00:16:51.394 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32535.92 127.09 3933.26 1175.44 7971.56 00:16:51.394 ======================================================== 00:16:51.394 Total : 32535.92 127.09 3933.26 1175.44 7971.56 00:16:51.394 00:16:51.394 [2024-10-07 09:37:45.802232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:51.394 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:51.394 [2024-10-07 09:37:46.066450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:56.656 Initializing NVMe Controllers 00:16:56.656 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:56.656 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:56.656 Initialization complete. Launching workers. 00:16:56.656 ======================================================== 00:16:56.656 Latency(us) 00:16:56.656 Device Information : IOPS MiB/s Average min max 00:16:56.656 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16005.40 62.52 8005.61 4998.62 15976.19 00:16:56.656 ======================================================== 00:16:56.656 Total : 16005.40 62.52 8005.61 4998.62 15976.19 00:16:56.656 00:16:56.656 [2024-10-07 09:37:51.107291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:56.656 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:56.656 [2024-10-07 09:37:51.338430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:01.952 [2024-10-07 09:37:56.407252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:01.952 Initializing NVMe Controllers 00:17:01.952 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:01.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:01.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:01.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:01.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:01.952 Initialization complete. Launching workers. 
00:17:01.952 Starting thread on core 2 00:17:01.952 Starting thread on core 3 00:17:01.952 Starting thread on core 1 00:17:01.952 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:02.210 [2024-10-07 09:37:56.770370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:05.523 [2024-10-07 09:37:59.882706] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:05.523 Initializing NVMe Controllers 00:17:05.523 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.523 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.523 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:05.523 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:05.523 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:05.523 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:05.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:05.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:05.523 Initialization complete. Launching workers. 00:17:05.523 Starting thread on core 1 with urgent priority queue 00:17:05.523 Starting thread on core 2 with urgent priority queue 00:17:05.523 Starting thread on core 3 with urgent priority queue 00:17:05.523 Starting thread on core 0 with urgent priority queue 00:17:05.523 SPDK bdev Controller (SPDK1 ) core 0: 5208.33 IO/s 19.20 secs/100000 ios 00:17:05.523 SPDK bdev Controller (SPDK1 ) core 1: 4578.67 IO/s 21.84 secs/100000 ios 00:17:05.523 SPDK bdev Controller (SPDK1 ) core 2: 5175.00 IO/s 19.32 secs/100000 ios 00:17:05.523 SPDK bdev Controller (SPDK1 ) core 3: 5914.00 IO/s 16.91 secs/100000 ios 00:17:05.523 ======================================================== 00:17:05.523 00:17:05.523 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:05.523 [2024-10-07 09:38:00.219458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:05.523 Initializing NVMe Controllers 00:17:05.523 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.523 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:05.523 Namespace ID: 1 size: 0GB 00:17:05.523 Initialization complete. 00:17:05.523 INFO: using host memory buffer for IO 00:17:05.523 Hello world! 
00:17:05.523 [2024-10-07 09:38:00.261097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:05.523 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:06.090 [2024-10-07 09:38:00.648358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:07.022 Initializing NVMe Controllers 00:17:07.022 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.022 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.022 Initialization complete. Launching workers. 00:17:07.022 submit (in ns) avg, min, max = 9268.8, 3530.0, 4029750.0 00:17:07.022 complete (in ns) avg, min, max = 24981.8, 2065.6, 5012431.1 00:17:07.022 00:17:07.022 Submit histogram 00:17:07.022 ================ 00:17:07.022 Range in us Cumulative Count 00:17:07.022 3.508 - 3.532: 0.0155% ( 2) 00:17:07.022 3.532 - 3.556: 0.1319% ( 15) 00:17:07.022 3.556 - 3.579: 1.0472% ( 118) 00:17:07.022 3.579 - 3.603: 2.7849% ( 224) 00:17:07.022 3.603 - 3.627: 8.5951% ( 749) 00:17:07.022 3.627 - 3.650: 15.7707% ( 925) 00:17:07.022 3.650 - 3.674: 26.4681% ( 1379) 00:17:07.022 3.674 - 3.698: 35.2727% ( 1135) 00:17:07.022 3.698 - 3.721: 44.5272% ( 1193) 00:17:07.022 3.721 - 3.745: 50.2676% ( 740) 00:17:07.022 3.745 - 3.769: 55.0306% ( 614) 00:17:07.022 3.769 - 3.793: 58.9016% ( 499) 00:17:07.022 3.793 - 3.816: 61.8959% ( 386) 00:17:07.022 3.816 - 3.840: 65.1462% ( 419) 00:17:07.022 3.840 - 3.864: 68.8232% ( 474) 00:17:07.022 3.864 - 3.887: 72.9268% ( 529) 00:17:07.022 3.887 - 3.911: 77.6045% ( 603) 00:17:07.022 3.911 - 3.935: 81.6384% ( 520) 00:17:07.022 3.935 - 3.959: 84.5551% ( 376) 00:17:07.022 3.959 - 3.982: 86.7504% ( 283) 00:17:07.022 3.982 - 4.006: 88.6432% ( 244) 00:17:07.022 4.006 - 4.030: 90.0706% ( 184) 00:17:07.022 4.030 - 4.053: 91.0868% ( 131) 00:17:07.022 4.053 - 4.077: 92.0720% ( 127) 00:17:07.022 4.077 - 4.101: 92.9175% ( 109) 00:17:07.022 4.101 - 4.124: 93.9105% ( 128) 00:17:07.022 4.124 - 4.148: 94.6707% ( 98) 00:17:07.022 4.148 - 4.172: 95.2370% ( 73) 00:17:07.022 4.172 - 4.196: 95.6093% ( 48) 00:17:07.022 4.196 - 4.219: 95.8653% ( 33) 00:17:07.022 4.219 - 4.243: 96.0825% ( 28) 00:17:07.022 4.243 - 4.267: 96.2377% ( 20) 00:17:07.022 4.267 - 4.290: 96.3463% ( 14) 00:17:07.022 4.290 - 4.314: 96.4937% ( 19) 00:17:07.022 4.314 - 4.338: 96.6643% ( 22) 00:17:07.022 4.338 - 4.361: 96.7497% ( 11) 00:17:07.022 4.361 - 4.385: 96.8272% ( 10) 00:17:07.022 4.385 - 4.409: 96.8893% ( 8) 00:17:07.022 4.409 - 4.433: 96.9436% ( 7) 00:17:07.022 4.433 - 4.456: 96.9979% ( 7) 00:17:07.022 4.456 - 4.480: 97.0212% ( 3) 00:17:07.022 4.480 - 4.504: 97.0367% ( 2) 00:17:07.022 4.504 - 4.527: 97.0910% ( 7) 00:17:07.022 4.527 - 4.551: 97.0988% ( 1) 00:17:07.022 4.551 - 4.575: 97.1143% ( 2) 00:17:07.022 4.575 - 4.599: 97.1220% ( 1) 00:17:07.022 4.599 - 4.622: 97.1375% ( 2) 00:17:07.022 4.622 - 4.646: 97.1531% ( 2) 00:17:07.022 4.646 - 4.670: 97.1763% ( 3) 00:17:07.022 4.670 - 4.693: 97.1918% ( 2) 00:17:07.022 4.717 - 4.741: 97.2306% ( 5) 00:17:07.022 4.741 - 4.764: 97.2694% ( 5) 00:17:07.022 4.764 - 4.788: 97.3004% ( 4) 00:17:07.022 4.788 - 4.812: 97.3780% ( 10) 00:17:07.022 4.812 - 4.836: 97.4168% ( 5) 00:17:07.022 4.836 - 4.859: 97.4944% ( 10) 00:17:07.022 4.859 - 4.883: 97.5875% ( 12) 00:17:07.022 4.883 
- 4.907: 97.6495% ( 8) 00:17:07.022 4.930 - 4.954: 97.6806% ( 4) 00:17:07.022 4.954 - 4.978: 97.7038% ( 3) 00:17:07.022 4.978 - 5.001: 97.7426% ( 5) 00:17:07.022 5.001 - 5.025: 97.7892% ( 6) 00:17:07.022 5.025 - 5.049: 97.8124% ( 3) 00:17:07.022 5.049 - 5.073: 97.8202% ( 1) 00:17:07.022 5.073 - 5.096: 97.8435% ( 3) 00:17:07.022 5.096 - 5.120: 97.8745% ( 4) 00:17:07.022 5.120 - 5.144: 97.8822% ( 1) 00:17:07.022 5.144 - 5.167: 97.8978% ( 2) 00:17:07.022 5.167 - 5.191: 97.9133% ( 2) 00:17:07.022 5.191 - 5.215: 97.9443% ( 4) 00:17:07.022 5.215 - 5.239: 97.9831% ( 5) 00:17:07.022 5.262 - 5.286: 97.9986% ( 2) 00:17:07.022 5.286 - 5.310: 98.0064% ( 1) 00:17:07.022 5.310 - 5.333: 98.0219% ( 2) 00:17:07.022 5.357 - 5.381: 98.0451% ( 3) 00:17:07.022 5.404 - 5.428: 98.0529% ( 1) 00:17:07.022 5.476 - 5.499: 98.0607% ( 1) 00:17:07.022 5.499 - 5.523: 98.0684% ( 1) 00:17:07.022 5.594 - 5.618: 98.0762% ( 1) 00:17:07.022 5.618 - 5.641: 98.0839% ( 1) 00:17:07.022 5.665 - 5.689: 98.0917% ( 1) 00:17:07.022 5.736 - 5.760: 98.0994% ( 1) 00:17:07.022 5.879 - 5.902: 98.1072% ( 1) 00:17:07.022 5.950 - 5.973: 98.1150% ( 1) 00:17:07.022 5.973 - 5.997: 98.1227% ( 1) 00:17:07.022 5.997 - 6.021: 98.1305% ( 1) 00:17:07.022 6.400 - 6.447: 98.1382% ( 1) 00:17:07.022 6.542 - 6.590: 98.1460% ( 1) 00:17:07.022 6.874 - 6.921: 98.1538% ( 1) 00:17:07.022 6.921 - 6.969: 98.1693% ( 2) 00:17:07.022 7.016 - 7.064: 98.1770% ( 1) 00:17:07.022 7.064 - 7.111: 98.1848% ( 1) 00:17:07.022 7.111 - 7.159: 98.1925% ( 1) 00:17:07.022 7.159 - 7.206: 98.2003% ( 1) 00:17:07.022 7.253 - 7.301: 98.2081% ( 1) 00:17:07.022 7.396 - 7.443: 98.2158% ( 1) 00:17:07.022 7.538 - 7.585: 98.2236% ( 1) 00:17:07.022 7.680 - 7.727: 98.2313% ( 1) 00:17:07.022 7.822 - 7.870: 98.2391% ( 1) 00:17:07.022 7.870 - 7.917: 98.2546% ( 2) 00:17:07.022 8.154 - 8.201: 98.2701% ( 2) 00:17:07.022 8.296 - 8.344: 98.2779% ( 1) 00:17:07.022 8.391 - 8.439: 98.2856% ( 1) 00:17:07.022 8.486 - 8.533: 98.2934% ( 1) 00:17:07.022 8.533 - 8.581: 98.3011% ( 1) 00:17:07.022 8.581 - 8.628: 98.3244% ( 3) 00:17:07.022 8.628 - 8.676: 98.3399% ( 2) 00:17:07.022 8.676 - 8.723: 98.3554% ( 2) 00:17:07.022 8.723 - 8.770: 98.3710% ( 2) 00:17:07.022 8.818 - 8.865: 98.3787% ( 1) 00:17:07.022 8.913 - 8.960: 98.3865% ( 1) 00:17:07.022 9.007 - 9.055: 98.4020% ( 2) 00:17:07.022 9.102 - 9.150: 98.4097% ( 1) 00:17:07.022 9.150 - 9.197: 98.4408% ( 4) 00:17:07.022 9.197 - 9.244: 98.4485% ( 1) 00:17:07.022 9.244 - 9.292: 98.4563% ( 1) 00:17:07.022 9.292 - 9.339: 98.4873% ( 4) 00:17:07.022 9.339 - 9.387: 98.5106% ( 3) 00:17:07.022 9.387 - 9.434: 98.5183% ( 1) 00:17:07.022 9.434 - 9.481: 98.5261% ( 1) 00:17:07.022 9.481 - 9.529: 98.5339% ( 1) 00:17:07.022 9.529 - 9.576: 98.5494% ( 2) 00:17:07.022 9.671 - 9.719: 98.5571% ( 1) 00:17:07.022 9.766 - 9.813: 98.5649% ( 1) 00:17:07.022 9.908 - 9.956: 98.5726% ( 1) 00:17:07.022 9.956 - 10.003: 98.5882% ( 2) 00:17:07.022 10.050 - 10.098: 98.5959% ( 1) 00:17:07.022 10.145 - 10.193: 98.6037% ( 1) 00:17:07.022 10.335 - 10.382: 98.6114% ( 1) 00:17:07.022 10.430 - 10.477: 98.6192% ( 1) 00:17:07.022 10.524 - 10.572: 98.6269% ( 1) 00:17:07.022 10.856 - 10.904: 98.6347% ( 1) 00:17:07.022 11.093 - 11.141: 98.6425% ( 1) 00:17:07.022 11.141 - 11.188: 98.6502% ( 1) 00:17:07.022 11.236 - 11.283: 98.6580% ( 1) 00:17:07.022 11.330 - 11.378: 98.6657% ( 1) 00:17:07.022 11.520 - 11.567: 98.6735% ( 1) 00:17:07.022 11.662 - 11.710: 98.6890% ( 2) 00:17:07.022 11.804 - 11.852: 98.6968% ( 1) 00:17:07.022 11.947 - 11.994: 98.7045% ( 1) 00:17:07.022 12.041 - 12.089: 98.7200% ( 2) 00:17:07.022 
12.421 - 12.516: 98.7278% ( 1) 00:17:07.022 12.516 - 12.610: 98.7356% ( 1) 00:17:07.022 12.610 - 12.705: 98.7511% ( 2) 00:17:07.022 12.895 - 12.990: 98.7588% ( 1) 00:17:07.023 13.084 - 13.179: 98.7899% ( 4) 00:17:07.023 13.179 - 13.274: 98.8054% ( 2) 00:17:07.023 13.369 - 13.464: 98.8131% ( 1) 00:17:07.023 13.559 - 13.653: 98.8209% ( 1) 00:17:07.023 13.653 - 13.748: 98.8286% ( 1) 00:17:07.023 13.748 - 13.843: 98.8364% ( 1) 00:17:07.023 14.033 - 14.127: 98.8519% ( 2) 00:17:07.023 14.127 - 14.222: 98.8597% ( 1) 00:17:07.023 14.507 - 14.601: 98.8752% ( 2) 00:17:07.023 14.791 - 14.886: 98.8829% ( 1) 00:17:07.023 15.076 - 15.170: 98.8907% ( 1) 00:17:07.023 16.972 - 17.067: 98.8985% ( 1) 00:17:07.023 17.067 - 17.161: 98.9062% ( 1) 00:17:07.023 17.161 - 17.256: 98.9217% ( 2) 00:17:07.023 17.256 - 17.351: 98.9295% ( 1) 00:17:07.023 17.351 - 17.446: 98.9372% ( 1) 00:17:07.023 17.541 - 17.636: 98.9915% ( 7) 00:17:07.023 17.636 - 17.730: 99.0381% ( 6) 00:17:07.023 17.825 - 17.920: 99.0924% ( 7) 00:17:07.023 17.920 - 18.015: 99.1467% ( 7) 00:17:07.023 18.015 - 18.110: 99.2165% ( 9) 00:17:07.023 18.110 - 18.204: 99.2631% ( 6) 00:17:07.023 18.204 - 18.299: 99.3251% ( 8) 00:17:07.023 18.299 - 18.394: 99.4027% ( 10) 00:17:07.023 18.394 - 18.489: 99.4880% ( 11) 00:17:07.023 18.489 - 18.584: 99.5578% ( 9) 00:17:07.023 18.584 - 18.679: 99.6276% ( 9) 00:17:07.023 18.679 - 18.773: 99.6664% ( 5) 00:17:07.023 18.773 - 18.868: 99.7130% ( 6) 00:17:07.023 18.868 - 18.963: 99.7440% ( 4) 00:17:07.023 18.963 - 19.058: 99.7750% ( 4) 00:17:07.023 19.058 - 19.153: 99.7906% ( 2) 00:17:07.023 19.153 - 19.247: 99.7983% ( 1) 00:17:07.023 19.247 - 19.342: 99.8061% ( 1) 00:17:07.023 19.437 - 19.532: 99.8138% ( 1) 00:17:07.023 19.627 - 19.721: 99.8216% ( 1) 00:17:07.023 20.859 - 20.954: 99.8293% ( 1) 00:17:07.023 22.945 - 23.040: 99.8371% ( 1) 00:17:07.023 25.979 - 26.169: 99.8449% ( 1) 00:17:07.023 26.359 - 26.548: 99.8526% ( 1) 00:17:07.023 28.634 - 28.824: 99.8604% ( 1) 00:17:07.023 29.772 - 29.961: 99.8681% ( 1) 00:17:07.023 3980.705 - 4004.978: 99.9767% ( 14) 00:17:07.023 4004.978 - 4029.250: 99.9922% ( 2) 00:17:07.023 4029.250 - 4053.523: 100.0000% ( 1) 00:17:07.023 00:17:07.023 Complete histogram 00:17:07.023 ================== 00:17:07.023 Range in us Cumulative Count 00:17:07.023 2.062 - 2.074: 2.7849% ( 359) 00:17:07.023 2.074 - 2.086: 40.9588% ( 4921) 00:17:07.023 2.086 - 2.098: 49.0109% ( 1038) 00:17:07.023 2.098 - 2.110: 51.2916% ( 294) 00:17:07.023 2.110 - 2.121: 57.8466% ( 845) 00:17:07.023 2.121 - 2.133: 60.0729% ( 287) 00:17:07.023 2.133 - 2.145: 65.4022% ( 687) 00:17:07.023 2.145 - 2.157: 76.0375% ( 1371) 00:17:07.023 2.157 - 2.169: 77.4339% ( 180) 00:17:07.023 2.169 - 2.181: 78.7914% ( 175) 00:17:07.023 2.181 - 2.193: 80.9014% ( 272) 00:17:07.023 2.193 - 2.204: 81.7702% ( 112) 00:17:07.023 2.204 - 2.216: 83.4303% ( 214) 00:17:07.023 2.216 - 2.228: 88.6820% ( 677) 00:17:07.023 2.228 - 2.240: 91.0325% ( 303) 00:17:07.023 2.240 - 2.252: 92.2582% ( 158) 00:17:07.023 2.252 - 2.264: 93.1658% ( 117) 00:17:07.023 2.264 - 2.276: 93.4218% ( 33) 00:17:07.023 2.276 - 2.287: 93.8639% ( 57) 00:17:07.023 2.287 - 2.299: 94.2596% ( 51) 00:17:07.023 2.299 - 2.311: 94.9655% ( 91) 00:17:07.023 2.311 - 2.323: 95.2758% ( 40) 00:17:07.023 2.323 - 2.335: 95.3223% ( 6) 00:17:07.023 2.335 - 2.347: 95.4076% ( 11) 00:17:07.023 2.347 - 2.359: 95.4620% ( 7) 00:17:07.023 2.359 - 2.370: 95.6171% ( 20) 00:17:07.023 2.370 - 2.382: 95.8964% ( 36) 00:17:07.023 2.382 - 2.394: 96.4626% ( 73) 00:17:07.023 2.394 - 2.406: 96.7962% ( 43) 00:17:07.023 
2.406 - 2.418: 96.9746% ( 23) 00:17:07.023 2.418 - 2.430: 97.1453% ( 22) 00:17:07.023 2.430 - 2.441: 97.3547% ( 27) 00:17:07.023 2.441 - 2.453: 97.5487% ( 25) 00:17:07.023 2.453 - 2.465: 97.6883% ( 18) 00:17:07.023 2.465 - 2.477: 97.7892% ( 13) 00:17:07.023 2.477 - 2.489: 97.9133% ( 16) 00:17:07.023 2.489 - 2.501: 98.0064% ( 12) 00:17:07.023 2.501 - 2.513: 98.1150% ( 14) 00:17:07.023 2.513 - 2.524: 98.1615% ( 6) 00:17:07.023 2.524 - 2.536: 98.2391% ( 10) 00:17:07.023 2.536 - 2.548: 98.2934% ( 7) 00:17:07.023 2.548 - 2.560: 98.3011% ( 1) 00:17:07.023 2.560 - 2.572: 98.3244% ( 3) 00:17:07.023 2.572 - 2.584: 98.3477% ( 3) 00:17:07.023 2.584 - 2.596: 98.3554% ( 1) 00:17:07.023 2.619 - 2.631: 98.3710% ( 2) 00:17:07.023 2.631 - 2.643: 98.3865% ( 2) 00:17:07.023 2.667 - 2.679: 98.4020% ( 2) 00:17:07.023 2.714 - 2.726: 98.4097% ( 1) 00:17:07.023 2.726 - 2.738: 98.4175% ( 1) 00:17:07.023 2.797 - 2.809: 98.4253% ( 1) 00:17:07.023 2.809 - 2.821: 98.4330% ( 1) 00:17:07.023 2.821 - 2.833: 98.4408% ( 1) 00:17:07.023 2.844 - 2.856: 98.4485% ( 1) 00:17:07.023 2.856 - 2.868: 98.4563% ( 1) 00:17:07.023 2.868 - 2.880: 98.4640% ( 1) 00:17:07.023 2.892 - 2.904: 98.4718% ( 1) 00:17:07.023 3.366 - 3.390: 98.4873% ( 2) 00:17:07.023 3.437 - 3.461: 98.4951% ( 1) 00:17:07.023 3.461 - 3.484: 98.5028% ( 1) 00:17:07.023 3.484 - 3.508: 98.5106% ( 1) 00:17:07.023 3.508 - 3.532: 9[2024-10-07 09:38:01.672482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:07.023 8.5261% ( 2) 00:17:07.023 3.556 - 3.579: 98.5416% ( 2) 00:17:07.023 3.579 - 3.603: 98.5494% ( 1) 00:17:07.023 3.603 - 3.627: 98.5571% ( 1) 00:17:07.023 3.627 - 3.650: 98.5649% ( 1) 00:17:07.023 3.650 - 3.674: 98.5804% ( 2) 00:17:07.023 3.674 - 3.698: 98.5882% ( 1) 00:17:07.023 3.698 - 3.721: 98.5959% ( 1) 00:17:07.023 3.721 - 3.745: 98.6037% ( 1) 00:17:07.023 3.745 - 3.769: 98.6114% ( 1) 00:17:07.023 3.769 - 3.793: 98.6192% ( 1) 00:17:07.023 3.793 - 3.816: 98.6269% ( 1) 00:17:07.023 3.816 - 3.840: 98.6425% ( 2) 00:17:07.023 3.864 - 3.887: 98.6502% ( 1) 00:17:07.023 4.030 - 4.053: 98.6580% ( 1) 00:17:07.023 4.409 - 4.433: 98.6657% ( 1) 00:17:07.023 6.590 - 6.637: 98.6813% ( 2) 00:17:07.023 6.779 - 6.827: 98.6890% ( 1) 00:17:07.023 6.921 - 6.969: 98.7045% ( 2) 00:17:07.023 6.969 - 7.016: 98.7123% ( 1) 00:17:07.023 7.111 - 7.159: 98.7200% ( 1) 00:17:07.023 7.348 - 7.396: 98.7278% ( 1) 00:17:07.023 7.396 - 7.443: 98.7356% ( 1) 00:17:07.023 7.490 - 7.538: 98.7433% ( 1) 00:17:07.023 7.727 - 7.775: 98.7511% ( 1) 00:17:07.023 7.917 - 7.964: 98.7588% ( 1) 00:17:07.023 8.296 - 8.344: 98.7666% ( 1) 00:17:07.023 8.344 - 8.391: 98.7743% ( 1) 00:17:07.023 9.102 - 9.150: 98.7821% ( 1) 00:17:07.023 9.292 - 9.339: 98.7899% ( 1) 00:17:07.023 9.576 - 9.624: 98.7976% ( 1) 00:17:07.023 9.624 - 9.671: 98.8054% ( 1) 00:17:07.023 9.671 - 9.719: 98.8131% ( 1) 00:17:07.023 10.050 - 10.098: 98.8209% ( 1) 00:17:07.023 15.360 - 15.455: 98.8286% ( 1) 00:17:07.023 15.550 - 15.644: 98.8442% ( 2) 00:17:07.023 15.644 - 15.739: 98.8519% ( 1) 00:17:07.023 15.739 - 15.834: 98.8674% ( 2) 00:17:07.023 15.929 - 16.024: 98.8752% ( 1) 00:17:07.023 16.024 - 16.119: 98.9062% ( 4) 00:17:07.023 16.119 - 16.213: 98.9372% ( 4) 00:17:07.023 16.213 - 16.308: 98.9683% ( 4) 00:17:07.023 16.308 - 16.403: 98.9993% ( 4) 00:17:07.023 16.403 - 16.498: 99.0381% ( 5) 00:17:07.023 16.498 - 16.593: 99.0924% ( 7) 00:17:07.023 16.593 - 16.687: 99.1079% ( 2) 00:17:07.023 16.687 - 16.782: 99.1467% ( 5) 00:17:07.023 16.782 - 16.877: 99.2010% ( 7) 00:17:07.023 16.877 - 
16.972: 99.2088% ( 1) 00:17:07.023 16.972 - 17.067: 99.2165% ( 1) 00:17:07.023 17.067 - 17.161: 99.2320% ( 2) 00:17:07.023 17.161 - 17.256: 99.2708% ( 5) 00:17:07.023 17.256 - 17.351: 99.2863% ( 2) 00:17:07.023 17.351 - 17.446: 99.3096% ( 3) 00:17:07.023 17.446 - 17.541: 99.3251% ( 2) 00:17:07.023 17.541 - 17.636: 99.3329% ( 1) 00:17:07.023 17.636 - 17.730: 99.3406% ( 1) 00:17:07.023 17.825 - 17.920: 99.3484% ( 1) 00:17:07.023 17.920 - 18.015: 99.3561% ( 1) 00:17:07.023 18.015 - 18.110: 99.3639% ( 1) 00:17:07.023 18.110 - 18.204: 99.3717% ( 1) 00:17:07.023 18.204 - 18.299: 99.3794% ( 1) 00:17:07.023 18.299 - 18.394: 99.3872% ( 1) 00:17:07.023 18.489 - 18.584: 99.3949% ( 1) 00:17:07.023 18.679 - 18.773: 99.4027% ( 1) 00:17:07.023 19.247 - 19.342: 99.4104% ( 1) 00:17:07.023 21.144 - 21.239: 99.4182% ( 1) 00:17:07.023 24.273 - 24.462: 99.4260% ( 1) 00:17:07.023 27.496 - 27.686: 99.4337% ( 1) 00:17:07.023 3046.210 - 3058.347: 99.4415% ( 1) 00:17:07.023 3932.160 - 3956.433: 99.4492% ( 1) 00:17:07.023 3980.705 - 4004.978: 99.8216% ( 48) 00:17:07.023 4004.978 - 4029.250: 99.9845% ( 21) 00:17:07.023 5000.154 - 5024.427: 100.0000% ( 2) 00:17:07.023 00:17:07.023 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:07.023 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:07.024 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:07.024 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:07.024 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:07.588 [ 00:17:07.588 { 00:17:07.588 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:07.588 "subtype": "Discovery", 00:17:07.588 "listen_addresses": [], 00:17:07.588 "allow_any_host": true, 00:17:07.588 "hosts": [] 00:17:07.588 }, 00:17:07.588 { 00:17:07.588 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:07.588 "subtype": "NVMe", 00:17:07.588 "listen_addresses": [ 00:17:07.588 { 00:17:07.588 "trtype": "VFIOUSER", 00:17:07.588 "adrfam": "IPv4", 00:17:07.588 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:07.588 "trsvcid": "0" 00:17:07.588 } 00:17:07.588 ], 00:17:07.588 "allow_any_host": true, 00:17:07.588 "hosts": [], 00:17:07.588 "serial_number": "SPDK1", 00:17:07.588 "model_number": "SPDK bdev Controller", 00:17:07.588 "max_namespaces": 32, 00:17:07.588 "min_cntlid": 1, 00:17:07.588 "max_cntlid": 65519, 00:17:07.588 "namespaces": [ 00:17:07.588 { 00:17:07.588 "nsid": 1, 00:17:07.588 "bdev_name": "Malloc1", 00:17:07.588 "name": "Malloc1", 00:17:07.588 "nguid": "FD69EAF08C4C4127B26F885DAA302255", 00:17:07.588 "uuid": "fd69eaf0-8c4c-4127-b26f-885daa302255" 00:17:07.588 } 00:17:07.588 ] 00:17:07.588 }, 00:17:07.588 { 00:17:07.588 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:07.588 "subtype": "NVMe", 00:17:07.588 "listen_addresses": [ 00:17:07.588 { 00:17:07.588 "trtype": "VFIOUSER", 00:17:07.588 "adrfam": "IPv4", 00:17:07.588 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:07.588 "trsvcid": "0" 00:17:07.588 } 00:17:07.588 ], 00:17:07.588 "allow_any_host": true, 00:17:07.588 "hosts": [], 00:17:07.588 "serial_number": "SPDK2", 00:17:07.588 "model_number": "SPDK bdev 
Controller", 00:17:07.588 "max_namespaces": 32, 00:17:07.588 "min_cntlid": 1, 00:17:07.588 "max_cntlid": 65519, 00:17:07.588 "namespaces": [ 00:17:07.588 { 00:17:07.588 "nsid": 1, 00:17:07.588 "bdev_name": "Malloc2", 00:17:07.588 "name": "Malloc2", 00:17:07.588 "nguid": "C29DF19E502B49228F4A0983AB7CB8D8", 00:17:07.588 "uuid": "c29df19e-502b-4922-8f4a-0983ab7cb8d8" 00:17:07.588 } 00:17:07.588 ] 00:17:07.588 } 00:17:07.588 ] 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1516814 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:07.588 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:07.846 [2024-10-07 09:38:02.445495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:08.411 Malloc3 00:17:08.411 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:08.669 [2024-10-07 09:38:03.416791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:08.669 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:08.669 Asynchronous Event Request test 00:17:08.669 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.669 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.669 Registering asynchronous event callbacks... 00:17:08.669 Starting namespace attribute notice tests for all controllers... 00:17:08.669 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:08.669 aer_cb - Changed Namespace 00:17:08.669 Cleaning up... 
00:17:09.234 [ 00:17:09.234 { 00:17:09.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:09.234 "subtype": "Discovery", 00:17:09.234 "listen_addresses": [], 00:17:09.234 "allow_any_host": true, 00:17:09.234 "hosts": [] 00:17:09.234 }, 00:17:09.234 { 00:17:09.234 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:09.234 "subtype": "NVMe", 00:17:09.234 "listen_addresses": [ 00:17:09.234 { 00:17:09.234 "trtype": "VFIOUSER", 00:17:09.234 "adrfam": "IPv4", 00:17:09.234 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:09.234 "trsvcid": "0" 00:17:09.234 } 00:17:09.234 ], 00:17:09.234 "allow_any_host": true, 00:17:09.234 "hosts": [], 00:17:09.234 "serial_number": "SPDK1", 00:17:09.234 "model_number": "SPDK bdev Controller", 00:17:09.234 "max_namespaces": 32, 00:17:09.234 "min_cntlid": 1, 00:17:09.234 "max_cntlid": 65519, 00:17:09.234 "namespaces": [ 00:17:09.234 { 00:17:09.234 "nsid": 1, 00:17:09.234 "bdev_name": "Malloc1", 00:17:09.234 "name": "Malloc1", 00:17:09.234 "nguid": "FD69EAF08C4C4127B26F885DAA302255", 00:17:09.234 "uuid": "fd69eaf0-8c4c-4127-b26f-885daa302255" 00:17:09.234 }, 00:17:09.234 { 00:17:09.234 "nsid": 2, 00:17:09.234 "bdev_name": "Malloc3", 00:17:09.234 "name": "Malloc3", 00:17:09.234 "nguid": "C0410748D76A4E2D8E7384DE135A7604", 00:17:09.234 "uuid": "c0410748-d76a-4e2d-8e73-84de135a7604" 00:17:09.234 } 00:17:09.234 ] 00:17:09.234 }, 00:17:09.234 { 00:17:09.234 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:09.234 "subtype": "NVMe", 00:17:09.234 "listen_addresses": [ 00:17:09.234 { 00:17:09.234 "trtype": "VFIOUSER", 00:17:09.235 "adrfam": "IPv4", 00:17:09.235 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:09.235 "trsvcid": "0" 00:17:09.235 } 00:17:09.235 ], 00:17:09.235 "allow_any_host": true, 00:17:09.235 "hosts": [], 00:17:09.235 "serial_number": "SPDK2", 00:17:09.235 "model_number": "SPDK bdev Controller", 00:17:09.235 "max_namespaces": 32, 00:17:09.235 "min_cntlid": 1, 00:17:09.235 "max_cntlid": 65519, 00:17:09.235 "namespaces": [ 00:17:09.235 { 00:17:09.235 "nsid": 1, 00:17:09.235 "bdev_name": "Malloc2", 00:17:09.235 "name": "Malloc2", 00:17:09.235 "nguid": "C29DF19E502B49228F4A0983AB7CB8D8", 00:17:09.235 "uuid": "c29df19e-502b-4922-8f4a-0983ab7cb8d8" 00:17:09.235 } 00:17:09.235 ] 00:17:09.235 } 00:17:09.235 ] 00:17:09.235 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1516814 00:17:09.235 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:09.235 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:09.235 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:09.235 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:09.235 [2024-10-07 09:38:03.935749] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:17:09.235 [2024-10-07 09:38:03.935843] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517018 ] 00:17:09.235 [2024-10-07 09:38:03.983770] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:09.235 [2024-10-07 09:38:03.992225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.235 [2024-10-07 09:38:03.992261] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc76f2ba000 00:17:09.235 [2024-10-07 09:38:03.993221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.994232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.995238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.996249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.997257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.998277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.235 [2024-10-07 09:38:03.999272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.235 [2024-10-07 09:38:04.000279] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.235 [2024-10-07 09:38:04.001291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.235 [2024-10-07 09:38:04.001313] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc76f2af000 00:17:09.235 [2024-10-07 09:38:04.002467] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:09.235 [2024-10-07 09:38:04.017544] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:09.235 [2024-10-07 09:38:04.017581] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:09.235 [2024-10-07 09:38:04.022698] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:09.235 [2024-10-07 09:38:04.022749] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:09.235 [2024-10-07 09:38:04.022834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:09.235 [2024-10-07 
09:38:04.022860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:09.235 [2024-10-07 09:38:04.022885] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:09.235 [2024-10-07 09:38:04.023704] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:09.235 [2024-10-07 09:38:04.023729] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:09.235 [2024-10-07 09:38:04.023743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:09.235 [2024-10-07 09:38:04.024706] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:09.235 [2024-10-07 09:38:04.024726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:09.235 [2024-10-07 09:38:04.024740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.025710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:09.235 [2024-10-07 09:38:04.025731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.026719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:09.235 [2024-10-07 09:38:04.026739] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:09.235 [2024-10-07 09:38:04.026748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.026759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.026869] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:09.235 [2024-10-07 09:38:04.026899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.026909] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:09.235 [2024-10-07 09:38:04.027728] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:09.235 [2024-10-07 09:38:04.028736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:09.235 [2024-10-07 09:38:04.029746] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:17:09.235 [2024-10-07 09:38:04.030741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:09.235 [2024-10-07 09:38:04.030825] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:09.235 [2024-10-07 09:38:04.031759] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:09.235 [2024-10-07 09:38:04.031779] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:09.235 [2024-10-07 09:38:04.031788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:09.235 [2024-10-07 09:38:04.031811] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:09.235 [2024-10-07 09:38:04.031828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:09.235 [2024-10-07 09:38:04.031853] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.235 [2024-10-07 09:38:04.031864] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.235 [2024-10-07 09:38:04.031885] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.235 [2024-10-07 09:38:04.031910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.235 [2024-10-07 09:38:04.039902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:09.235 [2024-10-07 09:38:04.039925] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:09.235 [2024-10-07 09:38:04.039949] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:09.235 [2024-10-07 09:38:04.039957] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:09.235 [2024-10-07 09:38:04.039965] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:09.235 [2024-10-07 09:38:04.039972] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:09.235 [2024-10-07 09:38:04.039980] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:09.235 [2024-10-07 09:38:04.039988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:09.235 [2024-10-07 09:38:04.040001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:09.235 [2024-10-07 09:38:04.040017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:09.235 [2024-10-07 09:38:04.047903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:09.235 [2024-10-07 09:38:04.047927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.235 [2024-10-07 09:38:04.047942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.235 [2024-10-07 09:38:04.047954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.235 [2024-10-07 09:38:04.047966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.236 [2024-10-07 09:38:04.047975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:09.236 [2024-10-07 09:38:04.047992] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:09.236 [2024-10-07 09:38:04.048008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.055900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.055918] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:09.495 [2024-10-07 09:38:04.055928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.055939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.055958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.055974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.063902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.063977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.063993] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.064007] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:09.495 [2024-10-07 09:38:04.064016] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:09.495 [2024-10-07 09:38:04.064022] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:17:09.495 [2024-10-07 09:38:04.064032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.071904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.071935] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:09.495 [2024-10-07 09:38:04.071952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.071967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.071979] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.495 [2024-10-07 09:38:04.071988] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.495 [2024-10-07 09:38:04.071994] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.495 [2024-10-07 09:38:04.072003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.079917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.079946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.079962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.079975] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.495 [2024-10-07 09:38:04.079984] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.495 [2024-10-07 09:38:04.079990] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.495 [2024-10-07 09:38:04.079999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.087904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.087925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087942] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087970] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.087995] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:09.495 [2024-10-07 09:38:04.088002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:09.495 [2024-10-07 09:38:04.088011] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:09.495 [2024-10-07 09:38:04.088036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.095901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.095928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.103903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.103927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.111901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.111926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.495 [2024-10-07 09:38:04.119904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:09.495 [2024-10-07 09:38:04.119936] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:09.495 [2024-10-07 09:38:04.119947] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:09.495 [2024-10-07 09:38:04.119953] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:09.495 [2024-10-07 09:38:04.119959] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:09.495 [2024-10-07 09:38:04.119964] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:09.495 [2024-10-07 09:38:04.119974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:09.495 [2024-10-07 09:38:04.119986] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:09.495 [2024-10-07 09:38:04.119994] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:09.495 [2024-10-07 09:38:04.120000] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.496 [2024-10-07 09:38:04.120009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:09.496 [2024-10-07 09:38:04.120020] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:09.496 [2024-10-07 09:38:04.120031] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.496 [2024-10-07 09:38:04.120037] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.496 [2024-10-07 09:38:04.120046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.496 [2024-10-07 09:38:04.120059] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:09.496 [2024-10-07 09:38:04.120067] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:09.496 [2024-10-07 09:38:04.120073] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.496 [2024-10-07 09:38:04.120081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:09.496 [2024-10-07 09:38:04.127901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:09.496 [2024-10-07 09:38:04.127929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:09.496 [2024-10-07 09:38:04.127947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:09.496 [2024-10-07 09:38:04.127959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:09.496 ===================================================== 00:17:09.496 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:09.496 ===================================================== 00:17:09.496 Controller Capabilities/Features 00:17:09.496 ================================ 00:17:09.496 Vendor ID: 4e58 00:17:09.496 Subsystem Vendor ID: 4e58 00:17:09.496 Serial Number: SPDK2 00:17:09.496 Model Number: SPDK bdev Controller 00:17:09.496 Firmware Version: 25.01 00:17:09.496 Recommended Arb Burst: 6 00:17:09.496 IEEE OUI Identifier: 8d 6b 50 00:17:09.496 Multi-path I/O 00:17:09.496 May have multiple subsystem ports: Yes 00:17:09.496 May have multiple controllers: Yes 00:17:09.496 Associated with SR-IOV VF: No 00:17:09.496 Max Data Transfer Size: 131072 00:17:09.496 Max Number of Namespaces: 32 00:17:09.496 Max Number of I/O Queues: 127 00:17:09.496 NVMe Specification Version (VS): 1.3 00:17:09.496 NVMe Specification Version (Identify): 1.3 00:17:09.496 Maximum Queue Entries: 256 00:17:09.496 Contiguous Queues Required: Yes 00:17:09.496 Arbitration Mechanisms Supported 00:17:09.496 Weighted Round Robin: Not Supported 00:17:09.496 Vendor Specific: Not Supported 00:17:09.496 Reset Timeout: 15000 ms 00:17:09.496 Doorbell Stride: 4 bytes 00:17:09.496 NVM Subsystem Reset: Not Supported 00:17:09.496 Command 
Sets Supported 00:17:09.496 NVM Command Set: Supported 00:17:09.496 Boot Partition: Not Supported 00:17:09.496 Memory Page Size Minimum: 4096 bytes 00:17:09.496 Memory Page Size Maximum: 4096 bytes 00:17:09.496 Persistent Memory Region: Not Supported 00:17:09.496 Optional Asynchronous Events Supported 00:17:09.496 Namespace Attribute Notices: Supported 00:17:09.496 Firmware Activation Notices: Not Supported 00:17:09.496 ANA Change Notices: Not Supported 00:17:09.496 PLE Aggregate Log Change Notices: Not Supported 00:17:09.496 LBA Status Info Alert Notices: Not Supported 00:17:09.496 EGE Aggregate Log Change Notices: Not Supported 00:17:09.496 Normal NVM Subsystem Shutdown event: Not Supported 00:17:09.496 Zone Descriptor Change Notices: Not Supported 00:17:09.496 Discovery Log Change Notices: Not Supported 00:17:09.496 Controller Attributes 00:17:09.496 128-bit Host Identifier: Supported 00:17:09.496 Non-Operational Permissive Mode: Not Supported 00:17:09.496 NVM Sets: Not Supported 00:17:09.496 Read Recovery Levels: Not Supported 00:17:09.496 Endurance Groups: Not Supported 00:17:09.496 Predictable Latency Mode: Not Supported 00:17:09.496 Traffic Based Keep ALive: Not Supported 00:17:09.496 Namespace Granularity: Not Supported 00:17:09.496 SQ Associations: Not Supported 00:17:09.496 UUID List: Not Supported 00:17:09.496 Multi-Domain Subsystem: Not Supported 00:17:09.496 Fixed Capacity Management: Not Supported 00:17:09.496 Variable Capacity Management: Not Supported 00:17:09.496 Delete Endurance Group: Not Supported 00:17:09.496 Delete NVM Set: Not Supported 00:17:09.496 Extended LBA Formats Supported: Not Supported 00:17:09.496 Flexible Data Placement Supported: Not Supported 00:17:09.496 00:17:09.496 Controller Memory Buffer Support 00:17:09.496 ================================ 00:17:09.496 Supported: No 00:17:09.496 00:17:09.496 Persistent Memory Region Support 00:17:09.496 ================================ 00:17:09.496 Supported: No 00:17:09.496 00:17:09.496 Admin Command Set Attributes 00:17:09.496 ============================ 00:17:09.496 Security Send/Receive: Not Supported 00:17:09.496 Format NVM: Not Supported 00:17:09.496 Firmware Activate/Download: Not Supported 00:17:09.496 Namespace Management: Not Supported 00:17:09.496 Device Self-Test: Not Supported 00:17:09.496 Directives: Not Supported 00:17:09.496 NVMe-MI: Not Supported 00:17:09.496 Virtualization Management: Not Supported 00:17:09.496 Doorbell Buffer Config: Not Supported 00:17:09.496 Get LBA Status Capability: Not Supported 00:17:09.496 Command & Feature Lockdown Capability: Not Supported 00:17:09.496 Abort Command Limit: 4 00:17:09.496 Async Event Request Limit: 4 00:17:09.496 Number of Firmware Slots: N/A 00:17:09.496 Firmware Slot 1 Read-Only: N/A 00:17:09.496 Firmware Activation Without Reset: N/A 00:17:09.496 Multiple Update Detection Support: N/A 00:17:09.496 Firmware Update Granularity: No Information Provided 00:17:09.496 Per-Namespace SMART Log: No 00:17:09.496 Asymmetric Namespace Access Log Page: Not Supported 00:17:09.496 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:09.496 Command Effects Log Page: Supported 00:17:09.496 Get Log Page Extended Data: Supported 00:17:09.496 Telemetry Log Pages: Not Supported 00:17:09.496 Persistent Event Log Pages: Not Supported 00:17:09.496 Supported Log Pages Log Page: May Support 00:17:09.496 Commands Supported & Effects Log Page: Not Supported 00:17:09.496 Feature Identifiers & Effects Log Page:May Support 00:17:09.496 NVMe-MI Commands & Effects Log Page: May Support 
00:17:09.496 Data Area 4 for Telemetry Log: Not Supported 00:17:09.496 Error Log Page Entries Supported: 128 00:17:09.496 Keep Alive: Supported 00:17:09.496 Keep Alive Granularity: 10000 ms 00:17:09.496 00:17:09.496 NVM Command Set Attributes 00:17:09.496 ========================== 00:17:09.496 Submission Queue Entry Size 00:17:09.496 Max: 64 00:17:09.496 Min: 64 00:17:09.496 Completion Queue Entry Size 00:17:09.496 Max: 16 00:17:09.496 Min: 16 00:17:09.496 Number of Namespaces: 32 00:17:09.496 Compare Command: Supported 00:17:09.496 Write Uncorrectable Command: Not Supported 00:17:09.496 Dataset Management Command: Supported 00:17:09.496 Write Zeroes Command: Supported 00:17:09.496 Set Features Save Field: Not Supported 00:17:09.496 Reservations: Not Supported 00:17:09.496 Timestamp: Not Supported 00:17:09.496 Copy: Supported 00:17:09.496 Volatile Write Cache: Present 00:17:09.496 Atomic Write Unit (Normal): 1 00:17:09.496 Atomic Write Unit (PFail): 1 00:17:09.496 Atomic Compare & Write Unit: 1 00:17:09.496 Fused Compare & Write: Supported 00:17:09.496 Scatter-Gather List 00:17:09.496 SGL Command Set: Supported (Dword aligned) 00:17:09.496 SGL Keyed: Not Supported 00:17:09.496 SGL Bit Bucket Descriptor: Not Supported 00:17:09.496 SGL Metadata Pointer: Not Supported 00:17:09.496 Oversized SGL: Not Supported 00:17:09.496 SGL Metadata Address: Not Supported 00:17:09.496 SGL Offset: Not Supported 00:17:09.496 Transport SGL Data Block: Not Supported 00:17:09.496 Replay Protected Memory Block: Not Supported 00:17:09.496 00:17:09.496 Firmware Slot Information 00:17:09.496 ========================= 00:17:09.496 Active slot: 1 00:17:09.496 Slot 1 Firmware Revision: 25.01 00:17:09.496 00:17:09.496 00:17:09.496 Commands Supported and Effects 00:17:09.496 ============================== 00:17:09.496 Admin Commands 00:17:09.496 -------------- 00:17:09.496 Get Log Page (02h): Supported 00:17:09.496 Identify (06h): Supported 00:17:09.496 Abort (08h): Supported 00:17:09.496 Set Features (09h): Supported 00:17:09.496 Get Features (0Ah): Supported 00:17:09.496 Asynchronous Event Request (0Ch): Supported 00:17:09.496 Keep Alive (18h): Supported 00:17:09.496 I/O Commands 00:17:09.496 ------------ 00:17:09.496 Flush (00h): Supported LBA-Change 00:17:09.496 Write (01h): Supported LBA-Change 00:17:09.496 Read (02h): Supported 00:17:09.497 Compare (05h): Supported 00:17:09.497 Write Zeroes (08h): Supported LBA-Change 00:17:09.497 Dataset Management (09h): Supported LBA-Change 00:17:09.497 Copy (19h): Supported LBA-Change 00:17:09.497 00:17:09.497 Error Log 00:17:09.497 ========= 00:17:09.497 00:17:09.497 Arbitration 00:17:09.497 =========== 00:17:09.497 Arbitration Burst: 1 00:17:09.497 00:17:09.497 Power Management 00:17:09.497 ================ 00:17:09.497 Number of Power States: 1 00:17:09.497 Current Power State: Power State #0 00:17:09.497 Power State #0: 00:17:09.497 Max Power: 0.00 W 00:17:09.497 Non-Operational State: Operational 00:17:09.497 Entry Latency: Not Reported 00:17:09.497 Exit Latency: Not Reported 00:17:09.497 Relative Read Throughput: 0 00:17:09.497 Relative Read Latency: 0 00:17:09.497 Relative Write Throughput: 0 00:17:09.497 Relative Write Latency: 0 00:17:09.497 Idle Power: Not Reported 00:17:09.497 Active Power: Not Reported 00:17:09.497 Non-Operational Permissive Mode: Not Supported 00:17:09.497 00:17:09.497 Health Information 00:17:09.497 ================== 00:17:09.497 Critical Warnings: 00:17:09.497 Available Spare Space: OK 00:17:09.497 Temperature: OK 00:17:09.497 Device 
Reliability: OK 00:17:09.497 Read Only: No 00:17:09.497 Volatile Memory Backup: OK 00:17:09.497 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:09.497 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:09.497 Available Spare: 0% 00:17:09.497 Available Sp[2024-10-07 09:38:04.128083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:09.497 [2024-10-07 09:38:04.135917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:09.497 [2024-10-07 09:38:04.135968] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:09.497 [2024-10-07 09:38:04.135986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.497 [2024-10-07 09:38:04.135997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.497 [2024-10-07 09:38:04.136006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.497 [2024-10-07 09:38:04.136016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.497 [2024-10-07 09:38:04.136102] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:09.497 [2024-10-07 09:38:04.136124] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:09.497 [2024-10-07 09:38:04.137104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:09.497 [2024-10-07 09:38:04.137194] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:09.497 [2024-10-07 09:38:04.137226] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:09.497 [2024-10-07 09:38:04.138115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:09.497 [2024-10-07 09:38:04.138141] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:09.497 [2024-10-07 09:38:04.138220] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:09.497 [2024-10-07 09:38:04.139453] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:09.497 are Threshold: 0% 00:17:09.497 Life Percentage Used: 0% 00:17:09.497 Data Units Read: 0 00:17:09.497 Data Units Written: 0 00:17:09.497 Host Read Commands: 0 00:17:09.497 Host Write Commands: 0 00:17:09.497 Controller Busy Time: 0 minutes 00:17:09.497 Power Cycles: 0 00:17:09.497 Power On Hours: 0 hours 00:17:09.497 Unsafe Shutdowns: 0 00:17:09.497 Unrecoverable Media Errors: 0 00:17:09.497 Lifetime Error Log Entries: 0 00:17:09.497 Warning Temperature Time: 0 minutes 00:17:09.497 Critical Temperature Time: 0 minutes 00:17:09.497 00:17:09.497 Number of Queues 00:17:09.497 ================ 00:17:09.497 Number of 
I/O Submission Queues: 127 00:17:09.497 Number of I/O Completion Queues: 127 00:17:09.497 00:17:09.497 Active Namespaces 00:17:09.497 ================= 00:17:09.497 Namespace ID:1 00:17:09.497 Error Recovery Timeout: Unlimited 00:17:09.497 Command Set Identifier: NVM (00h) 00:17:09.497 Deallocate: Supported 00:17:09.497 Deallocated/Unwritten Error: Not Supported 00:17:09.497 Deallocated Read Value: Unknown 00:17:09.497 Deallocate in Write Zeroes: Not Supported 00:17:09.497 Deallocated Guard Field: 0xFFFF 00:17:09.497 Flush: Supported 00:17:09.497 Reservation: Supported 00:17:09.497 Namespace Sharing Capabilities: Multiple Controllers 00:17:09.497 Size (in LBAs): 131072 (0GiB) 00:17:09.497 Capacity (in LBAs): 131072 (0GiB) 00:17:09.497 Utilization (in LBAs): 131072 (0GiB) 00:17:09.497 NGUID: C29DF19E502B49228F4A0983AB7CB8D8 00:17:09.497 UUID: c29df19e-502b-4922-8f4a-0983ab7cb8d8 00:17:09.497 Thin Provisioning: Not Supported 00:17:09.497 Per-NS Atomic Units: Yes 00:17:09.497 Atomic Boundary Size (Normal): 0 00:17:09.497 Atomic Boundary Size (PFail): 0 00:17:09.497 Atomic Boundary Offset: 0 00:17:09.497 Maximum Single Source Range Length: 65535 00:17:09.497 Maximum Copy Length: 65535 00:17:09.497 Maximum Source Range Count: 1 00:17:09.497 NGUID/EUI64 Never Reused: No 00:17:09.497 Namespace Write Protected: No 00:17:09.497 Number of LBA Formats: 1 00:17:09.497 Current LBA Format: LBA Format #00 00:17:09.497 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:09.497 00:17:09.497 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:09.756 [2024-10-07 09:38:04.428953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:15.159 Initializing NVMe Controllers 00:17:15.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:15.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:15.159 Initialization complete. Launching workers. 
00:17:15.159 ======================================================== 00:17:15.159 Latency(us) 00:17:15.159 Device Information : IOPS MiB/s Average min max 00:17:15.159 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33084.19 129.24 3869.30 1180.70 7619.88 00:17:15.159 ======================================================== 00:17:15.159 Total : 33084.19 129.24 3869.30 1180.70 7619.88 00:17:15.159 00:17:15.159 [2024-10-07 09:38:09.538282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:15.159 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:15.159 [2024-10-07 09:38:09.847132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:20.431 Initializing NVMe Controllers 00:17:20.431 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:20.431 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:20.431 Initialization complete. Launching workers. 00:17:20.431 ======================================================== 00:17:20.431 Latency(us) 00:17:20.431 Device Information : IOPS MiB/s Average min max 00:17:20.431 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31116.82 121.55 4112.98 1205.32 8369.55 00:17:20.431 ======================================================== 00:17:20.431 Total : 31116.82 121.55 4112.98 1205.32 8369.55 00:17:20.431 00:17:20.431 [2024-10-07 09:38:14.871557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:20.432 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:20.432 [2024-10-07 09:38:15.121737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.702 [2024-10-07 09:38:20.268040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.702 Initializing NVMe Controllers 00:17:25.702 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.702 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:25.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:25.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:25.702 Initialization complete. Launching workers. 
00:17:25.702 Starting thread on core 2 00:17:25.702 Starting thread on core 3 00:17:25.702 Starting thread on core 1 00:17:25.702 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:25.961 [2024-10-07 09:38:20.592377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:29.248 [2024-10-07 09:38:23.657199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:29.248 Initializing NVMe Controllers 00:17:29.248 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.248 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.248 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:29.248 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:29.248 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:29.248 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:29.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:29.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:29.248 Initialization complete. Launching workers. 00:17:29.248 Starting thread on core 1 with urgent priority queue 00:17:29.248 Starting thread on core 2 with urgent priority queue 00:17:29.248 Starting thread on core 3 with urgent priority queue 00:17:29.248 Starting thread on core 0 with urgent priority queue 00:17:29.248 SPDK bdev Controller (SPDK2 ) core 0: 3390.67 IO/s 29.49 secs/100000 ios 00:17:29.248 SPDK bdev Controller (SPDK2 ) core 1: 3084.67 IO/s 32.42 secs/100000 ios 00:17:29.248 SPDK bdev Controller (SPDK2 ) core 2: 3415.67 IO/s 29.28 secs/100000 ios 00:17:29.248 SPDK bdev Controller (SPDK2 ) core 3: 3621.00 IO/s 27.62 secs/100000 ios 00:17:29.248 ======================================================== 00:17:29.248 00:17:29.248 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:29.248 [2024-10-07 09:38:24.053430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:29.248 Initializing NVMe Controllers 00:17:29.248 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.248 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.248 Namespace ID: 1 size: 0GB 00:17:29.248 Initialization complete. 00:17:29.248 INFO: using host memory buffer for IO 00:17:29.248 Hello world! 
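Editor's note: the runs above all point SPDK's bundled example apps (spdk_nvme_perf, reconnect, arbitration, hello_world) at the same vfio-user controller by passing a transport ID string with -r instead of a PCIe address. The following is a minimal sketch of that pattern, reusing the socket path and NQN from this log; the flag descriptions are based on common spdk_nvme_perf usage rather than anything stated in this log, so treat them as assumptions and check each tool's --help.

  # Sketch only: run from the SPDK build tree while the vfio-user target from this job is still up.
  # Transport ID selecting the vfio-user endpoint created earlier in this run.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # 5-second 4 KiB sequential-read pass on core 1, queue depth 128, as in the @84 step above.
  ./build/bin/spdk_nvme_perf -r "$TRID" -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g

  # Same shape for the write pass (@85); only -w changes.
  ./build/bin/spdk_nvme_perf -r "$TRID" -q 128 -o 4096 -w write -t 5 -c 0x2 -s 256 -g

  # The other examples accept the same -r string, e.g. the hello_world step (@88).
  ./build/examples/hello_world -r "$TRID" -d 256 -g

Here -q, -o, -w, -t and -c are queue depth, I/O size in bytes, workload type, run time in seconds and core mask; -s, -d and -g appear to tune each app's DPDK memory setup (exact semantics per the tools' --help output).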
00:17:29.248 [2024-10-07 09:38:24.062618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:29.507 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:29.768 [2024-10-07 09:38:24.407783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:30.704 Initializing NVMe Controllers 00:17:30.704 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.704 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.704 Initialization complete. Launching workers. 00:17:30.704 submit (in ns) avg, min, max = 7363.5, 3514.4, 4016055.6 00:17:30.704 complete (in ns) avg, min, max = 26488.0, 2058.9, 4017202.2 00:17:30.704 00:17:30.704 Submit histogram 00:17:30.704 ================ 00:17:30.704 Range in us Cumulative Count 00:17:30.704 3.508 - 3.532: 0.5518% ( 71) 00:17:30.704 3.532 - 3.556: 1.5700% ( 131) 00:17:30.704 3.556 - 3.579: 4.9744% ( 438) 00:17:30.704 3.579 - 3.603: 10.2518% ( 679) 00:17:30.704 3.603 - 3.627: 19.8430% ( 1234) 00:17:30.704 3.627 - 3.650: 29.6363% ( 1260) 00:17:30.704 3.650 - 3.674: 39.4062% ( 1257) 00:17:30.704 3.674 - 3.698: 45.9817% ( 846) 00:17:30.704 3.698 - 3.721: 52.4328% ( 830) 00:17:30.704 3.721 - 3.745: 56.8475% ( 568) 00:17:30.704 3.745 - 3.769: 61.0602% ( 542) 00:17:30.704 3.769 - 3.793: 64.3634% ( 425) 00:17:30.704 3.793 - 3.816: 67.3170% ( 380) 00:17:30.704 3.816 - 3.840: 70.7213% ( 438) 00:17:30.704 3.840 - 3.864: 74.8873% ( 536) 00:17:30.704 3.864 - 3.887: 79.0067% ( 530) 00:17:30.704 3.887 - 3.911: 82.6597% ( 470) 00:17:30.704 3.911 - 3.935: 85.5899% ( 377) 00:17:30.704 3.935 - 3.959: 87.4942% ( 245) 00:17:30.704 3.959 - 3.982: 89.2196% ( 222) 00:17:30.704 3.982 - 4.006: 90.5721% ( 174) 00:17:30.704 4.006 - 4.030: 91.6291% ( 136) 00:17:30.704 4.030 - 4.053: 92.7328% ( 142) 00:17:30.704 4.053 - 4.077: 93.4556% ( 93) 00:17:30.705 4.077 - 4.101: 94.2251% ( 99) 00:17:30.705 4.101 - 4.124: 94.9479% ( 93) 00:17:30.705 4.124 - 4.148: 95.6164% ( 86) 00:17:30.705 4.148 - 4.172: 95.9661% ( 45) 00:17:30.705 4.172 - 4.196: 96.2381% ( 35) 00:17:30.705 4.196 - 4.219: 96.5024% ( 34) 00:17:30.705 4.219 - 4.243: 96.6423% ( 18) 00:17:30.705 4.243 - 4.267: 96.7589% ( 15) 00:17:30.705 4.267 - 4.290: 96.8211% ( 8) 00:17:30.705 4.290 - 4.314: 96.9688% ( 19) 00:17:30.705 4.314 - 4.338: 97.0776% ( 14) 00:17:30.705 4.338 - 4.361: 97.1164% ( 5) 00:17:30.705 4.361 - 4.385: 97.1475% ( 4) 00:17:30.705 4.385 - 4.409: 97.2175% ( 9) 00:17:30.705 4.409 - 4.433: 97.2641% ( 6) 00:17:30.705 4.433 - 4.456: 97.3030% ( 5) 00:17:30.705 4.456 - 4.480: 97.3185% ( 2) 00:17:30.705 4.480 - 4.504: 97.3496% ( 4) 00:17:30.705 4.504 - 4.527: 97.3729% ( 3) 00:17:30.705 4.527 - 4.551: 97.3962% ( 3) 00:17:30.705 4.575 - 4.599: 97.4040% ( 1) 00:17:30.705 4.622 - 4.646: 97.4273% ( 3) 00:17:30.705 4.693 - 4.717: 97.4429% ( 2) 00:17:30.705 4.717 - 4.741: 97.4584% ( 2) 00:17:30.705 4.741 - 4.764: 97.4817% ( 3) 00:17:30.705 4.764 - 4.788: 97.5284% ( 6) 00:17:30.705 4.788 - 4.812: 97.5439% ( 2) 00:17:30.705 4.812 - 4.836: 97.5517% ( 1) 00:17:30.705 4.836 - 4.859: 97.6139% ( 8) 00:17:30.705 4.859 - 4.883: 97.6760% ( 8) 00:17:30.705 4.883 - 4.907: 97.7382% ( 8) 00:17:30.705 4.907 - 4.930: 97.7926% ( 7) 00:17:30.705 4.930 - 4.954: 97.8315% ( 5) 00:17:30.705 4.954 - 
4.978: 97.8781% ( 6) 00:17:30.705 4.978 - 5.001: 97.9092% ( 4) 00:17:30.705 5.001 - 5.025: 97.9636% ( 7) 00:17:30.705 5.025 - 5.049: 97.9869% ( 3) 00:17:30.705 5.049 - 5.073: 98.0025% ( 2) 00:17:30.705 5.073 - 5.096: 98.0258% ( 3) 00:17:30.705 5.096 - 5.120: 98.0958% ( 9) 00:17:30.705 5.120 - 5.144: 98.1268% ( 4) 00:17:30.705 5.144 - 5.167: 98.1579% ( 4) 00:17:30.705 5.167 - 5.191: 98.1890% ( 4) 00:17:30.705 5.191 - 5.215: 98.2123% ( 3) 00:17:30.705 5.215 - 5.239: 98.2512% ( 5) 00:17:30.705 5.239 - 5.262: 98.2590% ( 1) 00:17:30.705 5.262 - 5.286: 98.2745% ( 2) 00:17:30.705 5.286 - 5.310: 98.3056% ( 4) 00:17:30.705 5.357 - 5.381: 98.3134% ( 1) 00:17:30.705 5.428 - 5.452: 98.3289% ( 2) 00:17:30.705 5.476 - 5.499: 98.3445% ( 2) 00:17:30.705 5.570 - 5.594: 98.3522% ( 1) 00:17:30.705 5.665 - 5.689: 98.3600% ( 1) 00:17:30.705 5.713 - 5.736: 98.3756% ( 2) 00:17:30.705 5.784 - 5.807: 98.3833% ( 1) 00:17:30.705 6.305 - 6.353: 98.3911% ( 1) 00:17:30.705 6.921 - 6.969: 98.4067% ( 2) 00:17:30.705 6.969 - 7.016: 98.4222% ( 2) 00:17:30.705 7.111 - 7.159: 98.4300% ( 1) 00:17:30.705 7.159 - 7.206: 98.4377% ( 1) 00:17:30.705 7.348 - 7.396: 98.4455% ( 1) 00:17:30.705 7.396 - 7.443: 98.4533% ( 1) 00:17:30.705 7.443 - 7.490: 98.4611% ( 1) 00:17:30.705 7.490 - 7.538: 98.4688% ( 1) 00:17:30.705 7.633 - 7.680: 98.4766% ( 1) 00:17:30.705 7.775 - 7.822: 98.4844% ( 1) 00:17:30.705 7.822 - 7.870: 98.4921% ( 1) 00:17:30.705 7.917 - 7.964: 98.4999% ( 1) 00:17:30.705 7.964 - 8.012: 98.5232% ( 3) 00:17:30.705 8.012 - 8.059: 98.5310% ( 1) 00:17:30.705 8.059 - 8.107: 98.5466% ( 2) 00:17:30.705 8.107 - 8.154: 98.5543% ( 1) 00:17:30.705 8.154 - 8.201: 98.5621% ( 1) 00:17:30.705 8.296 - 8.344: 98.5854% ( 3) 00:17:30.705 8.439 - 8.486: 98.6010% ( 2) 00:17:30.705 8.486 - 8.533: 98.6165% ( 2) 00:17:30.705 8.913 - 8.960: 98.6243% ( 1) 00:17:30.705 8.960 - 9.007: 98.6321% ( 1) 00:17:30.705 9.007 - 9.055: 98.6476% ( 2) 00:17:30.705 9.055 - 9.102: 98.6554% ( 1) 00:17:30.705 9.387 - 9.434: 98.6709% ( 2) 00:17:30.705 9.434 - 9.481: 98.6787% ( 1) 00:17:30.705 9.576 - 9.624: 98.6865% ( 1) 00:17:30.705 9.624 - 9.671: 98.6942% ( 1) 00:17:30.705 9.813 - 9.861: 98.7020% ( 1) 00:17:30.705 9.908 - 9.956: 98.7098% ( 1) 00:17:30.705 9.956 - 10.003: 98.7253% ( 2) 00:17:30.705 10.003 - 10.050: 98.7331% ( 1) 00:17:30.705 10.050 - 10.098: 98.7409% ( 1) 00:17:30.705 10.145 - 10.193: 98.7486% ( 1) 00:17:30.705 10.240 - 10.287: 98.7564% ( 1) 00:17:30.705 10.335 - 10.382: 98.7642% ( 1) 00:17:30.705 10.430 - 10.477: 98.7720% ( 1) 00:17:30.705 10.524 - 10.572: 98.7797% ( 1) 00:17:30.705 10.572 - 10.619: 98.7875% ( 1) 00:17:30.705 10.999 - 11.046: 98.7953% ( 1) 00:17:30.705 11.093 - 11.141: 98.8030% ( 1) 00:17:30.705 11.141 - 11.188: 98.8108% ( 1) 00:17:30.705 11.188 - 11.236: 98.8186% ( 1) 00:17:30.705 11.283 - 11.330: 98.8264% ( 1) 00:17:30.705 11.330 - 11.378: 98.8341% ( 1) 00:17:30.705 11.710 - 11.757: 98.8419% ( 1) 00:17:30.705 12.610 - 12.705: 98.8497% ( 1) 00:17:30.705 12.800 - 12.895: 98.8575% ( 1) 00:17:30.705 12.895 - 12.990: 98.8652% ( 1) 00:17:30.705 13.274 - 13.369: 98.8730% ( 1) 00:17:30.705 14.033 - 14.127: 98.8808% ( 1) 00:17:30.705 14.222 - 14.317: 98.8885% ( 1) 00:17:30.705 14.317 - 14.412: 98.8963% ( 1) 00:17:30.705 14.412 - 14.507: 98.9041% ( 1) 00:17:30.705 14.696 - 14.791: 98.9119% ( 1) 00:17:30.705 16.972 - 17.067: 98.9196% ( 1) 00:17:30.705 17.067 - 17.161: 98.9274% ( 1) 00:17:30.705 17.256 - 17.351: 98.9352% ( 1) 00:17:30.705 17.446 - 17.541: 98.9430% ( 1) 00:17:30.705 17.541 - 17.636: 99.0129% ( 9) 00:17:30.705 17.636 - 17.730: 
99.0829% ( 9) 00:17:30.705 17.730 - 17.825: 99.1217% ( 5) 00:17:30.705 17.825 - 17.920: 99.1684% ( 6) 00:17:30.705 17.920 - 18.015: 99.2305% ( 8) 00:17:30.705 18.015 - 18.110: 99.3238% ( 12) 00:17:30.705 18.110 - 18.204: 99.4093% ( 11) 00:17:30.705 18.204 - 18.299: 99.4637% ( 7) 00:17:30.705 18.299 - 18.394: 99.4948% ( 4) 00:17:30.705 18.394 - 18.489: 99.5647% ( 9) 00:17:30.705 18.489 - 18.584: 99.6502% ( 11) 00:17:30.705 18.584 - 18.679: 99.6736% ( 3) 00:17:30.705 18.679 - 18.773: 99.7124% ( 5) 00:17:30.705 18.773 - 18.868: 99.7435% ( 4) 00:17:30.705 18.868 - 18.963: 99.7668% ( 3) 00:17:30.705 18.963 - 19.058: 99.7824% ( 2) 00:17:30.705 19.058 - 19.153: 99.8212% ( 5) 00:17:30.705 19.342 - 19.437: 99.8290% ( 1) 00:17:30.705 19.437 - 19.532: 99.8368% ( 1) 00:17:30.705 19.532 - 19.627: 99.8523% ( 2) 00:17:30.705 19.627 - 19.721: 99.8601% ( 1) 00:17:30.705 19.911 - 20.006: 99.8679% ( 1) 00:17:30.705 20.290 - 20.385: 99.8834% ( 2) 00:17:30.705 20.954 - 21.049: 99.8990% ( 2) 00:17:30.705 23.135 - 23.230: 99.9067% ( 1) 00:17:30.705 24.462 - 24.652: 99.9145% ( 1) 00:17:30.705 3980.705 - 4004.978: 99.9922% ( 10) 00:17:30.705 4004.978 - 4029.250: 100.0000% ( 1) 00:17:30.705 00:17:30.705 Complete histogram 00:17:30.705 ================== 00:17:30.705 Range in us Cumulative Count 00:17:30.705 2.050 - 2.062: 0.1477% ( 19) 00:17:30.705 2.062 - 2.074: 19.9829% ( 2552) 00:17:30.705 2.074 - 2.086: 45.6941% ( 3308) 00:17:30.705 2.086 - 2.098: 47.3418% ( 212) 00:17:30.705 2.098 - 2.110: 52.9069% ( 716) 00:17:30.705 2.110 - 2.121: 58.1144% ( 670) 00:17:30.705 2.121 - 2.133: 61.1301% ( 388) 00:17:30.705 2.133 - 2.145: 72.0270% ( 1402) 00:17:30.705 2.145 - 2.157: 77.2190% ( 668) 00:17:30.705 2.157 - 2.169: 78.0507% ( 107) 00:17:30.705 2.169 - 2.181: 80.4757% ( 312) 00:17:30.705 2.181 - 2.193: 81.8825% ( 181) 00:17:30.705 2.193 - 2.204: 82.8696% ( 127) 00:17:30.705 2.204 - 2.216: 86.7014% ( 493) 00:17:30.705 2.216 - 2.228: 89.0720% ( 305) 00:17:30.705 2.228 - 2.240: 90.8907% ( 234) 00:17:30.705 2.240 - 2.252: 92.5618% ( 215) 00:17:30.705 2.252 - 2.264: 93.4401% ( 113) 00:17:30.705 2.264 - 2.276: 93.7121% ( 35) 00:17:30.705 2.276 - 2.287: 94.0930% ( 49) 00:17:30.705 2.287 - 2.299: 94.4583% ( 47) 00:17:30.705 2.299 - 2.311: 95.0490% ( 76) 00:17:30.705 2.311 - 2.323: 95.4842% ( 56) 00:17:30.705 2.323 - 2.335: 95.5464% ( 8) 00:17:30.705 2.335 - 2.347: 95.5775% ( 4) 00:17:30.705 2.347 - 2.359: 95.6319% ( 7) 00:17:30.705 2.359 - 2.370: 95.8262% ( 25) 00:17:30.705 2.370 - 2.382: 95.9739% ( 19) 00:17:30.705 2.382 - 2.394: 96.3081% ( 43) 00:17:30.705 2.394 - 2.406: 96.5879% ( 36) 00:17:30.705 2.406 - 2.418: 96.7278% ( 18) 00:17:30.705 2.418 - 2.430: 96.9221% ( 25) 00:17:30.705 2.430 - 2.441: 97.1009% ( 23) 00:17:30.705 2.441 - 2.453: 97.2641% ( 21) 00:17:30.705 2.453 - 2.465: 97.4895% ( 29) 00:17:30.705 2.465 - 2.477: 97.6605% ( 22) 00:17:30.705 2.477 - 2.489: 97.8393% ( 23) 00:17:30.705 2.489 - 2.501: 97.9559% ( 15) 00:17:30.705 2.501 - 2.513: 98.0958% ( 18) 00:17:30.705 2.513 - 2.524: 98.1502% ( 7) 00:17:30.705 2.524 - 2.536: 98.2046% ( 7) 00:17:30.705 2.536 - 2.548: 98.2434% ( 5) 00:17:30.705 2.548 - 2.560: 98.2745% ( 4) 00:17:30.705 2.560 - 2.572: 98.2978% ( 3) 00:17:30.706 2.572 - 2.584: 98.3212% ( 3) 00:17:30.706 2.584 - 2.596: 98.3445% ( 3) 00:17:30.706 2.607 - 2.619: 98.3522% ( 1) 00:17:30.706 2.631 - 2.643: 98.3756% ( 3) 00:17:30.706 2.655 - 2.667: 98.3833% ( 1) 00:17:30.706 2.702 - 2.714: 98.3911% ( 1) 00:17:30.706 2.880 - 2.892: 98.3989% ( 1) 00:17:30.706 3.295 - 3.319: 98.4067% ( 1) 00:17:30.706 3.603 - 
3.627: 98.4144% ( 1) 00:17:30.706 3.627 - 3.650: 98.4300% ( 2) 00:17:30.706 3.650 - 3.674: 98.4455% ( 2) 00:17:30.706 3.674 - 3.698: 98.4611% ( 2) 00:17:30.706 3.698 - 3.721: 98.4688% ( 1) 00:17:30.706 3.721 - 3.745: 98.4921% ( 3) 00:17:30.706 3.769 - 3.793: 98.4999% ( 1) 00:17:30.706 3.793 - 3.816: 98.5077% ( 1) 00:17:30.706 3.840 - 3.864: 98.5155% ( 1) 00:17:30.706 3.864 - 3.887: 98.5310% ( 2) 00:17:30.706 3.959 - 3.982: 98.5388% ( 1) 00:17:30.706 4.101 - 4.124: 98.5466% ( 1) 00:17:30.706 4.124 - 4.148: 98.5543% ( 1) 00:17:30.706 4.148 - 4.172: 98.5621% ( 1) 00:17:30.706 4.219 - 4.243: 98.5699% ( 1) 00:17:30.706 4.243 - 4.267: 98.5776% ( 1) 00:17:30.706 4.267 - 4.290: 98.5854% ( 1) 00:17:30.706 4.338 - 4.361: 98.5932% ( 1) 00:17:30.706 4.409 - 4.433: 98.6010% ( 1) 00:17:30.706 4.433 - 4.456: 98.6087% ( 1) 00:17:30.706 5.381 - 5.404: 98.6165% ( 1) 00:17:30.706 5.452 - 5.476: 98.6243% ( 1) 00:17:30.706 5.831 - 5.855: 98.6321% ( 1) 00:17:30.706 6.068 - 6.116: 98.6398% ( 1) 00:17:30.706 6.258 - 6.305: 98.6554% ( 2) 00:17:30.706 6.400 - 6.447: 98.6631% ( 1) 00:17:30.706 6.495 - 6.542: 98.6787% ( 2) 00:17:30.706 6.590 - 6.637: 98.6865% ( 1) 00:17:30.706 6.827 - 6.874: 98.6942% ( 1) 00:17:30.706 7.111 - 7.159: 98.7098% ( 2) 00:17:30.706 7.396 - 7.443: 98.7176% ( 1) 00:17:30.706 7.633 - 7.680: 98.7253% ( 1) 00:17:30.706 7.727 - 7.775: 9[2024-10-07 09:38:25.504642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:30.965 8.7409% ( 2) 00:17:30.965 8.012 - 8.059: 98.7486% ( 1) 00:17:30.965 8.201 - 8.249: 98.7564% ( 1) 00:17:30.965 9.576 - 9.624: 98.7642% ( 1) 00:17:30.965 15.455 - 15.550: 98.7720% ( 1) 00:17:30.965 15.644 - 15.739: 98.7875% ( 2) 00:17:30.965 15.739 - 15.834: 98.8264% ( 5) 00:17:30.965 15.834 - 15.929: 98.8341% ( 1) 00:17:30.965 15.929 - 16.024: 98.8575% ( 3) 00:17:30.965 16.024 - 16.119: 98.8808% ( 3) 00:17:30.965 16.119 - 16.213: 98.9119% ( 4) 00:17:30.965 16.213 - 16.308: 98.9585% ( 6) 00:17:30.965 16.308 - 16.403: 99.0129% ( 7) 00:17:30.965 16.403 - 16.498: 99.0518% ( 5) 00:17:30.965 16.498 - 16.593: 99.0751% ( 3) 00:17:30.965 16.593 - 16.687: 99.1062% ( 4) 00:17:30.965 16.687 - 16.782: 99.1217% ( 2) 00:17:30.965 16.782 - 16.877: 99.1606% ( 5) 00:17:30.965 16.877 - 16.972: 99.2228% ( 8) 00:17:30.965 16.972 - 17.067: 99.2772% ( 7) 00:17:30.965 17.067 - 17.161: 99.2849% ( 1) 00:17:30.965 17.161 - 17.256: 99.2927% ( 1) 00:17:30.965 17.351 - 17.446: 99.3083% ( 2) 00:17:30.965 17.730 - 17.825: 99.3238% ( 2) 00:17:30.965 17.825 - 17.920: 99.3316% ( 1) 00:17:30.965 17.920 - 18.015: 99.3471% ( 2) 00:17:30.965 18.015 - 18.110: 99.3627% ( 2) 00:17:30.965 18.110 - 18.204: 99.3704% ( 1) 00:17:30.965 18.204 - 18.299: 99.3782% ( 1) 00:17:30.965 18.299 - 18.394: 99.3860% ( 1) 00:17:30.965 18.489 - 18.584: 99.3938% ( 1) 00:17:30.965 3980.705 - 4004.978: 99.8834% ( 63) 00:17:30.965 4004.978 - 4029.250: 100.0000% ( 15) 00:17:30.965 00:17:30.965 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:30.965 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:30.965 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:30.965 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:30.965 
09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:31.225 [ 00:17:31.225 { 00:17:31.225 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:31.225 "subtype": "Discovery", 00:17:31.225 "listen_addresses": [], 00:17:31.225 "allow_any_host": true, 00:17:31.225 "hosts": [] 00:17:31.225 }, 00:17:31.225 { 00:17:31.225 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:31.225 "subtype": "NVMe", 00:17:31.225 "listen_addresses": [ 00:17:31.225 { 00:17:31.225 "trtype": "VFIOUSER", 00:17:31.225 "adrfam": "IPv4", 00:17:31.225 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:31.225 "trsvcid": "0" 00:17:31.225 } 00:17:31.225 ], 00:17:31.226 "allow_any_host": true, 00:17:31.226 "hosts": [], 00:17:31.226 "serial_number": "SPDK1", 00:17:31.226 "model_number": "SPDK bdev Controller", 00:17:31.226 "max_namespaces": 32, 00:17:31.226 "min_cntlid": 1, 00:17:31.226 "max_cntlid": 65519, 00:17:31.226 "namespaces": [ 00:17:31.226 { 00:17:31.226 "nsid": 1, 00:17:31.226 "bdev_name": "Malloc1", 00:17:31.226 "name": "Malloc1", 00:17:31.226 "nguid": "FD69EAF08C4C4127B26F885DAA302255", 00:17:31.226 "uuid": "fd69eaf0-8c4c-4127-b26f-885daa302255" 00:17:31.226 }, 00:17:31.226 { 00:17:31.226 "nsid": 2, 00:17:31.226 "bdev_name": "Malloc3", 00:17:31.226 "name": "Malloc3", 00:17:31.226 "nguid": "C0410748D76A4E2D8E7384DE135A7604", 00:17:31.226 "uuid": "c0410748-d76a-4e2d-8e73-84de135a7604" 00:17:31.226 } 00:17:31.226 ] 00:17:31.226 }, 00:17:31.226 { 00:17:31.226 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:31.226 "subtype": "NVMe", 00:17:31.226 "listen_addresses": [ 00:17:31.226 { 00:17:31.226 "trtype": "VFIOUSER", 00:17:31.226 "adrfam": "IPv4", 00:17:31.226 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:31.226 "trsvcid": "0" 00:17:31.226 } 00:17:31.226 ], 00:17:31.226 "allow_any_host": true, 00:17:31.226 "hosts": [], 00:17:31.226 "serial_number": "SPDK2", 00:17:31.226 "model_number": "SPDK bdev Controller", 00:17:31.226 "max_namespaces": 32, 00:17:31.226 "min_cntlid": 1, 00:17:31.226 "max_cntlid": 65519, 00:17:31.226 "namespaces": [ 00:17:31.226 { 00:17:31.226 "nsid": 1, 00:17:31.226 "bdev_name": "Malloc2", 00:17:31.226 "name": "Malloc2", 00:17:31.226 "nguid": "C29DF19E502B49228F4A0983AB7CB8D8", 00:17:31.226 "uuid": "c29df19e-502b-4922-8f4a-0983ab7cb8d8" 00:17:31.226 } 00:17:31.226 ] 00:17:31.226 } 00:17:31.226 ] 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1519525 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:31.226 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:31.485 [2024-10-07 09:38:26.087297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.743 Malloc4 00:17:31.743 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:32.001 [2024-10-07 09:38:26.774419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:32.001 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:32.259 Asynchronous Event Request test 00:17:32.259 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:32.259 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:32.259 Registering asynchronous event callbacks... 00:17:32.259 Starting namespace attribute notice tests for all controllers... 00:17:32.259 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:32.259 aer_cb - Changed Namespace 00:17:32.259 Cleaning up... 00:17:32.519 [ 00:17:32.519 { 00:17:32.519 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:32.519 "subtype": "Discovery", 00:17:32.519 "listen_addresses": [], 00:17:32.519 "allow_any_host": true, 00:17:32.519 "hosts": [] 00:17:32.519 }, 00:17:32.519 { 00:17:32.519 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:32.519 "subtype": "NVMe", 00:17:32.519 "listen_addresses": [ 00:17:32.519 { 00:17:32.519 "trtype": "VFIOUSER", 00:17:32.519 "adrfam": "IPv4", 00:17:32.519 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:32.519 "trsvcid": "0" 00:17:32.519 } 00:17:32.519 ], 00:17:32.519 "allow_any_host": true, 00:17:32.519 "hosts": [], 00:17:32.519 "serial_number": "SPDK1", 00:17:32.519 "model_number": "SPDK bdev Controller", 00:17:32.519 "max_namespaces": 32, 00:17:32.519 "min_cntlid": 1, 00:17:32.519 "max_cntlid": 65519, 00:17:32.519 "namespaces": [ 00:17:32.519 { 00:17:32.519 "nsid": 1, 00:17:32.519 "bdev_name": "Malloc1", 00:17:32.519 "name": "Malloc1", 00:17:32.519 "nguid": "FD69EAF08C4C4127B26F885DAA302255", 00:17:32.519 "uuid": "fd69eaf0-8c4c-4127-b26f-885daa302255" 00:17:32.519 }, 00:17:32.519 { 00:17:32.519 "nsid": 2, 00:17:32.519 "bdev_name": "Malloc3", 00:17:32.519 "name": "Malloc3", 00:17:32.519 "nguid": "C0410748D76A4E2D8E7384DE135A7604", 00:17:32.519 "uuid": "c0410748-d76a-4e2d-8e73-84de135a7604" 00:17:32.519 } 00:17:32.519 ] 00:17:32.519 }, 00:17:32.519 { 00:17:32.519 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:32.519 "subtype": "NVMe", 00:17:32.519 "listen_addresses": [ 00:17:32.519 { 00:17:32.519 "trtype": "VFIOUSER", 00:17:32.519 "adrfam": "IPv4", 00:17:32.519 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:32.519 "trsvcid": "0" 00:17:32.519 } 00:17:32.519 ], 00:17:32.519 "allow_any_host": true, 00:17:32.519 "hosts": [], 00:17:32.519 "serial_number": "SPDK2", 00:17:32.519 "model_number": "SPDK bdev 
Controller", 00:17:32.519 "max_namespaces": 32, 00:17:32.519 "min_cntlid": 1, 00:17:32.519 "max_cntlid": 65519, 00:17:32.519 "namespaces": [ 00:17:32.519 { 00:17:32.519 "nsid": 1, 00:17:32.519 "bdev_name": "Malloc2", 00:17:32.519 "name": "Malloc2", 00:17:32.519 "nguid": "C29DF19E502B49228F4A0983AB7CB8D8", 00:17:32.519 "uuid": "c29df19e-502b-4922-8f4a-0983ab7cb8d8" 00:17:32.519 }, 00:17:32.519 { 00:17:32.519 "nsid": 2, 00:17:32.519 "bdev_name": "Malloc4", 00:17:32.519 "name": "Malloc4", 00:17:32.519 "nguid": "0BD5265B7E874CE8B9F11F8D8DFF9E8C", 00:17:32.519 "uuid": "0bd5265b-7e87-4ce8-b9f1-1f8d8dff9e8c" 00:17:32.519 } 00:17:32.519 ] 00:17:32.519 } 00:17:32.519 ] 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1519525 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1513568 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1513568 ']' 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1513568 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1513568 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1513568' 00:17:32.520 killing process with pid 1513568 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1513568 00:17:32.520 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1513568 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1519675 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1519675' 00:17:33.089 Process pid: 1519675 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1519675 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1519675 ']' 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.089 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:33.089 [2024-10-07 09:38:27.728765] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:33.089 [2024-10-07 09:38:27.730310] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:17:33.089 [2024-10-07 09:38:27.730388] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.089 [2024-10-07 09:38:27.814448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.348 [2024-10-07 09:38:27.928070] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.348 [2024-10-07 09:38:27.928124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.348 [2024-10-07 09:38:27.928155] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.348 [2024-10-07 09:38:27.928167] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.348 [2024-10-07 09:38:27.928178] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.348 [2024-10-07 09:38:27.929992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.348 [2024-10-07 09:38:27.930068] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.348 [2024-10-07 09:38:27.930015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.348 [2024-10-07 09:38:27.930071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.348 [2024-10-07 09:38:28.033256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:33.348 [2024-10-07 09:38:28.033474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:33.348 [2024-10-07 09:38:28.033756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:33.348 [2024-10-07 09:38:28.034358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:17:33.348 [2024-10-07 09:38:28.034599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:34.288 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.288 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:34.288 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:35.670 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:35.929 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:35.929 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:35.929 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:35.929 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:35.929 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:36.497 Malloc1 00:17:36.497 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:37.065 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:37.633 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:38.200 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:38.200 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:38.200 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:38.769 Malloc2 00:17:39.028 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:39.286 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:39.545 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:40.116 09:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1519675 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1519675 ']' 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1519675 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.116 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1519675 00:17:40.373 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.373 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.373 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1519675' 00:17:40.373 killing process with pid 1519675 00:17:40.373 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1519675 00:17:40.373 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1519675 00:17:40.631 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:40.631 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:40.631 00:17:40.632 real 1m1.227s 00:17:40.632 user 3m53.247s 00:17:40.632 sys 0m5.470s 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:40.632 ************************************ 00:17:40.632 END TEST nvmf_vfio_user 00:17:40.632 ************************************ 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.632 ************************************ 00:17:40.632 START TEST nvmf_vfio_user_nvme_compliance 00:17:40.632 ************************************ 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:40.632 * Looking for test storage... 
00:17:40.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:17:40.632 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.890 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.891 --rc genhtml_branch_coverage=1 00:17:40.891 --rc genhtml_function_coverage=1 00:17:40.891 --rc genhtml_legend=1 00:17:40.891 --rc geninfo_all_blocks=1 00:17:40.891 --rc geninfo_unexecuted_blocks=1 00:17:40.891 00:17:40.891 ' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.891 --rc genhtml_branch_coverage=1 00:17:40.891 --rc genhtml_function_coverage=1 00:17:40.891 --rc genhtml_legend=1 00:17:40.891 --rc geninfo_all_blocks=1 00:17:40.891 --rc geninfo_unexecuted_blocks=1 00:17:40.891 00:17:40.891 ' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.891 --rc genhtml_branch_coverage=1 00:17:40.891 --rc genhtml_function_coverage=1 00:17:40.891 --rc genhtml_legend=1 00:17:40.891 --rc geninfo_all_blocks=1 00:17:40.891 --rc geninfo_unexecuted_blocks=1 00:17:40.891 00:17:40.891 ' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.891 --rc genhtml_branch_coverage=1 00:17:40.891 --rc genhtml_function_coverage=1 00:17:40.891 --rc genhtml_legend=1 00:17:40.891 --rc geninfo_all_blocks=1 00:17:40.891 --rc 
geninfo_unexecuted_blocks=1 00:17:40.891 00:17:40.891 ' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:40.891 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1520693 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1520693' 00:17:40.892 Process pid: 1520693 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1520693 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1520693 ']' 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.892 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.892 [2024-10-07 09:38:35.704671] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:17:40.892 [2024-10-07 09:38:35.704826] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.151 [2024-10-07 09:38:35.789561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.151 [2024-10-07 09:38:35.912516] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.151 [2024-10-07 09:38:35.912579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.151 [2024-10-07 09:38:35.912596] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.151 [2024-10-07 09:38:35.912609] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.151 [2024-10-07 09:38:35.912629] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.151 [2024-10-07 09:38:35.913715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.151 [2024-10-07 09:38:35.913806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.151 [2024-10-07 09:38:35.913809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.409 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.409 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:41.409 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:42.343 malloc0 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:42.343 09:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.343 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:42.601 00:17:42.601 00:17:42.601 CUnit - A unit testing framework for C - Version 2.1-3 00:17:42.601 http://cunit.sourceforge.net/ 00:17:42.601 00:17:42.601 00:17:42.601 Suite: nvme_compliance 00:17:42.601 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-07 09:38:37.271681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.601 [2024-10-07 09:38:37.273214] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:42.601 [2024-10-07 09:38:37.273240] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:42.601 [2024-10-07 09:38:37.273267] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:42.601 [2024-10-07 09:38:37.274707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.601 passed 00:17:42.601 Test: admin_identify_ctrlr_verify_fused ...[2024-10-07 09:38:37.360343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.601 [2024-10-07 09:38:37.365373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.601 passed 00:17:42.859 Test: admin_identify_ns ...[2024-10-07 09:38:37.450444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.859 [2024-10-07 09:38:37.510906] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:42.859 [2024-10-07 09:38:37.518918] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:42.859 [2024-10-07 09:38:37.540031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:42.859 passed 00:17:42.859 Test: admin_get_features_mandatory_features ...[2024-10-07 09:38:37.622199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.859 [2024-10-07 09:38:37.625238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.859 passed 00:17:43.116 Test: admin_get_features_optional_features ...[2024-10-07 09:38:37.709785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.117 [2024-10-07 09:38:37.715814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.117 passed 00:17:43.117 Test: admin_set_features_number_of_queues ...[2024-10-07 09:38:37.797061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.117 [2024-10-07 09:38:37.901999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.374 passed 00:17:43.374 Test: admin_get_log_page_mandatory_logs ...[2024-10-07 09:38:37.988354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.374 [2024-10-07 09:38:37.991375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.374 passed 00:17:43.374 Test: admin_get_log_page_with_lpo ...[2024-10-07 09:38:38.074421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.374 [2024-10-07 09:38:38.142905] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:43.374 [2024-10-07 09:38:38.155986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.374 passed 00:17:43.632 Test: fabric_property_get ...[2024-10-07 09:38:38.235662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.632 [2024-10-07 09:38:38.236970] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:43.632 [2024-10-07 09:38:38.238684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.632 passed 00:17:43.632 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-07 09:38:38.324284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.632 [2024-10-07 09:38:38.325561] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:43.632 [2024-10-07 09:38:38.327307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.632 passed 00:17:43.632 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-07 09:38:38.410491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.890 [2024-10-07 09:38:38.493901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:43.890 [2024-10-07 09:38:38.509900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:43.890 [2024-10-07 09:38:38.515032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.890 passed 00:17:43.890 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-07 09:38:38.601205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.890 [2024-10-07 09:38:38.602484] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:43.890 [2024-10-07 09:38:38.604213] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.890 passed 00:17:43.890 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-07 09:38:38.686580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:44.147 [2024-10-07 09:38:38.761901] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:44.147 [2024-10-07 09:38:38.785915] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:44.147 [2024-10-07 09:38:38.791012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:44.147 passed 00:17:44.147 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-07 09:38:38.874607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:44.147 [2024-10-07 09:38:38.875898] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:44.147 [2024-10-07 09:38:38.875951] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:44.147 [2024-10-07 09:38:38.877629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:44.147 passed 00:17:44.147 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-07 09:38:38.959803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:44.405 [2024-10-07 09:38:39.056900] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:44.405 [2024-10-07 09:38:39.064913] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:44.405 [2024-10-07 09:38:39.072914] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:44.405 [2024-10-07 09:38:39.080913] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:44.405 [2024-10-07 09:38:39.109023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:44.405 passed 00:17:44.405 Test: admin_create_io_sq_verify_pc ...[2024-10-07 09:38:39.192686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:44.405 [2024-10-07 09:38:39.208930] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:44.663 [2024-10-07 09:38:39.226087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:44.663 passed 00:17:44.663 Test: admin_create_io_qp_max_qps ...[2024-10-07 09:38:39.308651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:45.599 [2024-10-07 09:38:40.399907] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:46.165 [2024-10-07 09:38:40.784247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.165 passed 00:17:46.165 Test: admin_create_io_sq_shared_cq ...[2024-10-07 09:38:40.867581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:46.423 [2024-10-07 09:38:40.999900] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:46.423 [2024-10-07 09:38:41.036983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:46.423 passed 00:17:46.423 00:17:46.423 Run Summary: Type Total Ran Passed Failed Inactive 00:17:46.423 suites 1 1 n/a 0 0 00:17:46.423 tests 18 18 18 0 0 00:17:46.423 asserts 360 
360 360 0 n/a 00:17:46.423 00:17:46.423 Elapsed time = 1.561 seconds 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1520693 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1520693 ']' 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1520693 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520693 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520693' 00:17:46.423 killing process with pid 1520693 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1520693 00:17:46.423 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1520693 00:17:46.681 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:46.681 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:46.681 00:17:46.681 real 0m6.122s 00:17:46.681 user 0m16.688s 00:17:46.681 sys 0m0.654s 00:17:46.681 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.681 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:46.681 ************************************ 00:17:46.681 END TEST nvmf_vfio_user_nvme_compliance 00:17:46.681 ************************************ 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:46.941 ************************************ 00:17:46.941 START TEST nvmf_vfio_user_fuzz 00:17:46.941 ************************************ 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:46.941 * Looking for test storage... 
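The killprocess steps above show the teardown pattern the autotest helpers follow: confirm the PID is still alive with kill -0, check that the process name is not a sudo wrapper before signalling it, then kill and wait so the target's exit status is collected. A condensed sketch of that flow (the function name and messages here are illustrative, not the exact autotest_common.sh code):

    # Condensed version of the killprocess flow visible in the log above.
    stop_target() {
        local pid=$1

        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do

        # Do not signal a sudo wrapper directly; pick up its real child instead.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = "sudo" ]; then
            pid=$(pgrep -P "$pid")
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # works here because nvmf_tgt is a child of this shell
    }
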
00:17:46.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:46.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.941 --rc genhtml_branch_coverage=1 00:17:46.941 --rc genhtml_function_coverage=1 00:17:46.941 --rc genhtml_legend=1 00:17:46.941 --rc geninfo_all_blocks=1 00:17:46.941 --rc geninfo_unexecuted_blocks=1 00:17:46.941 00:17:46.941 ' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:46.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.941 --rc genhtml_branch_coverage=1 00:17:46.941 --rc genhtml_function_coverage=1 00:17:46.941 --rc genhtml_legend=1 00:17:46.941 --rc geninfo_all_blocks=1 00:17:46.941 --rc geninfo_unexecuted_blocks=1 00:17:46.941 00:17:46.941 ' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:46.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.941 --rc genhtml_branch_coverage=1 00:17:46.941 --rc genhtml_function_coverage=1 00:17:46.941 --rc genhtml_legend=1 00:17:46.941 --rc geninfo_all_blocks=1 00:17:46.941 --rc geninfo_unexecuted_blocks=1 00:17:46.941 00:17:46.941 ' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:46.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.941 --rc genhtml_branch_coverage=1 00:17:46.941 --rc genhtml_function_coverage=1 00:17:46.941 --rc genhtml_legend=1 00:17:46.941 --rc geninfo_all_blocks=1 00:17:46.941 --rc geninfo_unexecuted_blocks=1 00:17:46.941 00:17:46.941 ' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.941 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:46.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1521435 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1521435' 00:17:46.942 Process pid: 1521435 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1521435 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1521435 ']' 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
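As with the compliance run, the fuzz script starts nvmf_tgt in the background with a one-core mask (-m 0x1), records its PID, and registers a trap so the target is torn down even if the test is interrupted. A minimal sketch of that launch-and-trap pattern, assuming a locally built nvmf_tgt and plain kill in place of the killprocess helper:

    # Launch the target in the background and guarantee cleanup on exit/interrupt.
    NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1)   # -m 0x1: single reactor core

    "${NVMF_APP[@]}" &
    nvmfpid=$!
    echo "Process pid: $nvmfpid"

    trap 'kill "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

    # Wait until the app answers on its RPC socket before configuring it;
    # waitforlisten in autotest_common.sh does essentially this poll.
    until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # ... configure the subsystem and run the fuzz workload ...

    trap - SIGINT SIGTERM EXIT
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null
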
00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.942 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:47.509 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.509 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:47.509 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.444 malloc0 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
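The rpc_cmd calls above are essentially wrappers around scripts/rpc.py talking to /var/tmp/spdk.sock; spelled out, the vfio-user target configuration the fuzz test builds looks roughly like this (NQN, sizes, and socket directory taken from the log, default RPC socket assumed):

    RPC=./scripts/rpc.py    # assumes the default /var/tmp/spdk.sock RPC socket

    $RPC nvmf_create_transport -t VFIOUSER       # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                  # directory that will hold the device socket

    $RPC bdev_malloc_create 64 512 -b malloc0    # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0   # traddr is the socket directory; the service id is a placeholder

The nvme_fuzz app that follows is then pointed at this listener through the trid string recorded just above (trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user).
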
00:17:48.444 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:20.610 Fuzzing completed. Shutting down the fuzz application 00:18:20.610 00:18:20.610 Dumping successful admin opcodes: 00:18:20.610 8, 9, 10, 24, 00:18:20.610 Dumping successful io opcodes: 00:18:20.610 0, 00:18:20.610 NS: 0x200003a1ef00 I/O qp, Total commands completed: 578073, total successful commands: 2223, random_seed: 1328152192 00:18:20.610 NS: 0x200003a1ef00 admin qp, Total commands completed: 112230, total successful commands: 920, random_seed: 3957967616 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1521435 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1521435 ']' 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1521435 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1521435 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1521435' 00:18:20.610 killing process with pid 1521435 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1521435 00:18:20.610 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1521435 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:20.610 00:18:20.610 real 0m32.588s 00:18:20.610 user 0m33.144s 00:18:20.610 sys 0m26.170s 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.610 
************************************ 00:18:20.610 END TEST nvmf_vfio_user_fuzz 00:18:20.610 ************************************ 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:20.610 09:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.611 ************************************ 00:18:20.611 START TEST nvmf_auth_target 00:18:20.611 ************************************ 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.611 * Looking for test storage... 00:18:20.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.611 --rc genhtml_branch_coverage=1 00:18:20.611 --rc genhtml_function_coverage=1 00:18:20.611 --rc genhtml_legend=1 00:18:20.611 --rc geninfo_all_blocks=1 00:18:20.611 --rc geninfo_unexecuted_blocks=1 00:18:20.611 00:18:20.611 ' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.611 --rc genhtml_branch_coverage=1 00:18:20.611 --rc genhtml_function_coverage=1 00:18:20.611 --rc genhtml_legend=1 00:18:20.611 --rc geninfo_all_blocks=1 00:18:20.611 --rc geninfo_unexecuted_blocks=1 00:18:20.611 00:18:20.611 ' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.611 --rc genhtml_branch_coverage=1 00:18:20.611 --rc genhtml_function_coverage=1 00:18:20.611 --rc genhtml_legend=1 00:18:20.611 --rc geninfo_all_blocks=1 00:18:20.611 --rc geninfo_unexecuted_blocks=1 00:18:20.611 00:18:20.611 ' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:20.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.611 --rc genhtml_branch_coverage=1 00:18:20.611 --rc genhtml_function_coverage=1 00:18:20.611 --rc genhtml_legend=1 00:18:20.611 --rc geninfo_all_blocks=1 00:18:20.611 --rc geninfo_unexecuted_blocks=1 00:18:20.611 00:18:20.611 ' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.611 09:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.611 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:20.612 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:21.989 
09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:21.989 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.989 09:39:16 
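The device scan above has just matched the first E810 port (0000:84:00.0, PCI ID 0x8086:0x159b); a few entries further on the same loop resolves each matched function to its kernel interface by globbing /sys/bus/pci/devices/<bdf>/net/, which is where the "Found net devices under ..." lines below come from. A minimal standalone sketch of that sysfs mapping (BDFs taken from this run; cvl_0_0 and cvl_0_1 are the interface names this rig uses):

    for pci in 0000:84:00.0 0000:84:00.1; do
        # each PCI network function exposes its netdev name(s) under .../net/
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "$pci -> ${netdev##*/}"
        done
    done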
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:21.989 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:21.989 Found net devices under 0000:84:00.0: cvl_0_0 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:21.989 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:21.990 Found net devices under 0000:84:00.1: cvl_0_1 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.990 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:22.248 09:39:16 
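At this point nvmf_tcp_init has finished wiring the test network: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the trace into one sketch (interface names, namespace and addresses as used on this rig); the two pings that follow then verify reachability in both directions:

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP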
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:22.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:18:22.248 00:18:22.248 --- 10.0.0.2 ping statistics --- 00:18:22.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.248 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:18:22.248 00:18:22.248 --- 10.0.0.1 ping statistics --- 00:18:22.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.248 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:22.248 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1527489 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1527489 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1527489 ']' 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
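nvmfappstart then launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth, pid 1527489) and waits for its JSON-RPC socket before configuring anything. A rough stand-in for that wait step, assuming the stock rpc.py client and the default /var/tmp/spdk.sock socket (this is not SPDK's actual waitforlisten, just the idea behind it):

    nvmfpid=$!                      # assumes nvmf_tgt was started in the background just before this
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before its RPC socket came up" >&2; exit 1; }
        # ready once the RPC socket answers a trivial request
        scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done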
00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.249 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1527524 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bee909180696e625d0cc329abec9143820867ccf7c0622c7 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.nmi 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bee909180696e625d0cc329abec9143820867ccf7c0622c7 0 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bee909180696e625d0cc329abec9143820867ccf7c0622c7 0 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bee909180696e625d0cc329abec9143820867ccf7c0622c7 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.nmi 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.nmi 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nmi 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=be2677f78442d973524c3c7ee8e8aeafaf976f72426cbdf691186eddd97e34d3 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.rju 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key be2677f78442d973524c3c7ee8e8aeafaf976f72426cbdf691186eddd97e34d3 3 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 be2677f78442d973524c3c7ee8e8aeafaf976f72426cbdf691186eddd97e34d3 3 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=be2677f78442d973524c3c7ee8e8aeafaf976f72426cbdf691186eddd97e34d3 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.rju 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.rju 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.rju 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e40de2bbfdf56708454eb1e8f5c32eae 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.NMT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e40de2bbfdf56708454eb1e8f5c32eae 1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e40de2bbfdf56708454eb1e8f5c32eae 1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e40de2bbfdf56708454eb1e8f5c32eae 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.NMT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.NMT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.NMT 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d2d2885a1dd9b4465db9df5dd90737293508761cd224e999 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ZLJ 00:18:22.816 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d2d2885a1dd9b4465db9df5dd90737293508761cd224e999 2 00:18:22.817 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d2d2885a1dd9b4465db9df5dd90737293508761cd224e999 2 00:18:22.817 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.817 09:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.817 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d2d2885a1dd9b4465db9df5dd90737293508761cd224e999 00:18:22.817 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:22.817 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ZLJ 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ZLJ 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ZLJ 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0154e1ad79b54e161b4f5eb69c01ddeab473899fd79bd31e 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.cub 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0154e1ad79b54e161b4f5eb69c01ddeab473899fd79bd31e 2 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0154e1ad79b54e161b4f5eb69c01ddeab473899fd79bd31e 2 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0154e1ad79b54e161b4f5eb69c01ddeab473899fd79bd31e 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.cub 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.cub 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.cub 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f80bb7417cb96170becc0da592682202 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.1Yz 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f80bb7417cb96170becc0da592682202 1 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f80bb7417cb96170becc0da592682202 1 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:23.075 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f80bb7417cb96170becc0da592682202 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.1Yz 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.1Yz 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1Yz 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8308802d5421161c44c26e9c28ae6f21906160d0059a102ddb83b787f6fd0de3 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.pEI 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 8308802d5421161c44c26e9c28ae6f21906160d0059a102ddb83b787f6fd0de3 3 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8308802d5421161c44c26e9c28ae6f21906160d0059a102ddb83b787f6fd0de3 3 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8308802d5421161c44c26e9c28ae6f21906160d0059a102ddb83b787f6fd0de3 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:23.076 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.pEI 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.pEI 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.pEI 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1527489 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1527489 ']' 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.334 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1527524 /var/tmp/host.sock 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1527524 ']' 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.592 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:23.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
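Each gen_dhchap_key call above draws a random secret from /dev/urandom with xxd, stores it in a mode-0600 temp file, and lets the inline python step wrap it in the DHHC-1 container format, DHHC-1:<digest id>:<base64 payload>:, where the digest id is 00 for a plain (null) secret and 01/02/03 for SHA-256/384/512. A condensed sketch of one such key; the payload composition (secret bytes plus a trailing CRC-32, least-significant byte first) is my reading of the DHHC-1 representation rather than something visible in this excerpt:

    len=48 digest=0                                 # "null"-type key: 48 hex characters, digest id 00
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # e.g. bee909...0622c7 in this run
    file=$(mktemp -t spdk.key-null.XXX)
    # assumption: base64 payload = secret bytes + CRC-32 of the secret, appended LSB-first
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest" > "$file"
    chmod 0600 "$file"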
00:18:23.593 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.593 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nmi 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nmi 00:18:24.157 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nmi 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.rju ]] 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju 00:18:24.723 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NMT 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.980 09:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NMT 00:18:24.980 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NMT 00:18:25.238 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ZLJ ]] 00:18:25.238 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZLJ 00:18:25.238 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.238 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.238 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.238 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZLJ 00:18:25.238 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZLJ 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cub 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.804 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cub 00:18:25.805 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cub 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1Yz ]] 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Yz 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Yz 00:18:26.064 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Yz 00:18:26.631 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:26.631 09:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pEI 00:18:26.632 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.632 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.632 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.632 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pEI 00:18:26.632 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pEI 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:27.198 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.457 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.457 
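With all four key pairs registered in both keyrings, one pass of the digest/dhgroup/key loop reduces to the RPC sequence below: pin the host-side initiator to a single digest and DH group, authorize the host NQN on the target subsystem with a DH-HMAC-CHAP key pair, and attach a controller using the same pair (the raw rpc.py form of that attach continues just below). The rpc.py path is shortened; key files, NQNs and addresses are the ones from this run:

    rpc=./scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # target side (default /var/tmp/spdk.sock): register the key files, then authorize the host
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.nmi
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: same key names, pin the negotiation, then attach with authentication
    $rpc -s "$hostsock" keyring_file_add_key key0 /tmp/spdk.key-null.nmi
    $rpc -s "$hostsock" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju
    $rpc -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0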
09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.717 00:18:27.976 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.976 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.976 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.235 { 00:18:28.235 "cntlid": 1, 00:18:28.235 "qid": 0, 00:18:28.235 "state": "enabled", 00:18:28.235 "thread": "nvmf_tgt_poll_group_000", 00:18:28.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:28.235 "listen_address": { 00:18:28.235 "trtype": "TCP", 00:18:28.235 "adrfam": "IPv4", 00:18:28.235 "traddr": "10.0.0.2", 00:18:28.235 "trsvcid": "4420" 00:18:28.235 }, 00:18:28.235 "peer_address": { 00:18:28.235 "trtype": "TCP", 00:18:28.235 "adrfam": "IPv4", 00:18:28.235 "traddr": "10.0.0.1", 00:18:28.235 "trsvcid": "57588" 00:18:28.235 }, 00:18:28.235 "auth": { 00:18:28.235 "state": "completed", 00:18:28.235 "digest": "sha256", 00:18:28.235 "dhgroup": "null" 00:18:28.235 } 00:18:28.235 } 00:18:28.235 ]' 00:18:28.235 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.235 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.235 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.494 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.494 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.494 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.494 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.494 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.752 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:28.752 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.127 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.128 09:39:24 
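Earlier in the trace, before the loop moved on to key1 just above, the same key pair was also exercised through the in-kernel initiator: nvme_connect hands nvme-cli the literal DHHC-1 strings, and the association is torn down again with nvme disconnect and nvmf_subsystem_remove_host. The essential nvme-cli invocation, reading the same DHHC-1 strings back from the key files generated earlier instead of repeating the long secrets inline:

    # the key files written by gen_dhchap_key hold the literal DHHC-1 strings that
    # the trace passes on the command line
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$(cat /tmp/spdk.key-null.nmi)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.rju)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0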
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.128 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.694 00:18:30.694 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.694 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.694 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.261 { 00:18:31.261 "cntlid": 3, 00:18:31.261 "qid": 0, 00:18:31.261 "state": "enabled", 00:18:31.261 "thread": "nvmf_tgt_poll_group_000", 00:18:31.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:31.261 "listen_address": { 00:18:31.261 "trtype": "TCP", 00:18:31.261 "adrfam": "IPv4", 00:18:31.261 "traddr": "10.0.0.2", 00:18:31.261 "trsvcid": "4420" 00:18:31.261 }, 00:18:31.261 "peer_address": { 00:18:31.261 "trtype": "TCP", 00:18:31.261 "adrfam": "IPv4", 00:18:31.261 "traddr": "10.0.0.1", 00:18:31.261 "trsvcid": "57610" 00:18:31.261 }, 00:18:31.261 "auth": { 00:18:31.261 "state": "completed", 00:18:31.261 "digest": "sha256", 00:18:31.261 "dhgroup": "null" 00:18:31.261 } 00:18:31.261 } 00:18:31.261 ]' 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:31.261 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.261 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.261 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.261 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.828 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:31.828 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.204 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.463 09:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.463 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.720 00:18:33.979 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.979 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.979 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.237 { 00:18:34.237 "cntlid": 5, 00:18:34.237 "qid": 0, 00:18:34.237 "state": "enabled", 00:18:34.237 "thread": "nvmf_tgt_poll_group_000", 00:18:34.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:34.237 "listen_address": { 00:18:34.237 "trtype": "TCP", 00:18:34.237 "adrfam": "IPv4", 00:18:34.237 "traddr": "10.0.0.2", 00:18:34.237 "trsvcid": "4420" 00:18:34.237 }, 00:18:34.237 "peer_address": { 00:18:34.237 "trtype": "TCP", 00:18:34.237 "adrfam": "IPv4", 00:18:34.237 "traddr": "10.0.0.1", 00:18:34.237 "trsvcid": "57642" 00:18:34.237 }, 00:18:34.237 "auth": { 00:18:34.237 "state": "completed", 00:18:34.237 "digest": "sha256", 00:18:34.237 "dhgroup": "null" 00:18:34.237 } 00:18:34.237 } 00:18:34.237 ]' 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:34.237 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.237 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.237 09:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.237 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.803 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:34.803 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.306 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.565 00:18:36.565 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.565 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.565 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.823 { 00:18:36.823 "cntlid": 7, 00:18:36.823 "qid": 0, 00:18:36.823 "state": "enabled", 00:18:36.823 "thread": "nvmf_tgt_poll_group_000", 00:18:36.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:36.823 "listen_address": { 00:18:36.823 "trtype": "TCP", 00:18:36.823 "adrfam": "IPv4", 00:18:36.823 "traddr": "10.0.0.2", 00:18:36.823 "trsvcid": "4420" 00:18:36.823 }, 00:18:36.823 "peer_address": { 00:18:36.823 "trtype": "TCP", 00:18:36.823 "adrfam": "IPv4", 00:18:36.823 "traddr": "10.0.0.1", 00:18:36.823 "trsvcid": "57666" 00:18:36.823 }, 00:18:36.823 "auth": { 00:18:36.823 "state": "completed", 00:18:36.823 "digest": "sha256", 00:18:36.823 "dhgroup": "null" 00:18:36.823 } 00:18:36.823 } 00:18:36.823 ]' 00:18:36.823 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.082 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.341 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:18:37.341 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.718 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.285 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.852 00:18:39.852 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.852 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.852 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.111 { 00:18:40.111 "cntlid": 9, 00:18:40.111 "qid": 0, 00:18:40.111 "state": "enabled", 00:18:40.111 "thread": "nvmf_tgt_poll_group_000", 00:18:40.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:40.111 "listen_address": { 00:18:40.111 "trtype": "TCP", 00:18:40.111 "adrfam": "IPv4", 00:18:40.111 "traddr": "10.0.0.2", 00:18:40.111 "trsvcid": "4420" 00:18:40.111 }, 00:18:40.111 "peer_address": { 00:18:40.111 "trtype": "TCP", 00:18:40.111 "adrfam": "IPv4", 00:18:40.111 "traddr": "10.0.0.1", 00:18:40.111 "trsvcid": "53360" 00:18:40.111 }, 00:18:40.111 "auth": { 00:18:40.111 "state": "completed", 00:18:40.111 "digest": "sha256", 00:18:40.111 "dhgroup": "ffdhe2048" 00:18:40.111 } 00:18:40.111 } 00:18:40.111 ]' 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.111 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.678 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:40.678 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:41.613 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.181 09:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.181 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.182 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.748 00:18:42.748 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.748 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.748 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.007 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.007 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.008 { 00:18:43.008 "cntlid": 11, 00:18:43.008 "qid": 0, 00:18:43.008 "state": "enabled", 00:18:43.008 "thread": "nvmf_tgt_poll_group_000", 00:18:43.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:43.008 "listen_address": { 00:18:43.008 "trtype": "TCP", 00:18:43.008 "adrfam": "IPv4", 00:18:43.008 "traddr": "10.0.0.2", 00:18:43.008 "trsvcid": "4420" 00:18:43.008 }, 00:18:43.008 "peer_address": { 00:18:43.008 "trtype": "TCP", 00:18:43.008 "adrfam": "IPv4", 00:18:43.008 "traddr": "10.0.0.1", 00:18:43.008 "trsvcid": "53392" 00:18:43.008 }, 00:18:43.008 "auth": { 00:18:43.008 "state": "completed", 00:18:43.008 "digest": "sha256", 00:18:43.008 "dhgroup": "ffdhe2048" 00:18:43.008 } 00:18:43.008 } 00:18:43.008 ]' 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.008 09:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.008 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.574 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:43.574 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.509 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.448 09:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.448 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.706 00:18:45.706 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.706 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.706 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.965 { 00:18:45.965 "cntlid": 13, 00:18:45.965 "qid": 0, 00:18:45.965 "state": "enabled", 00:18:45.965 "thread": "nvmf_tgt_poll_group_000", 00:18:45.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:45.965 "listen_address": { 00:18:45.965 "trtype": "TCP", 00:18:45.965 "adrfam": "IPv4", 00:18:45.965 "traddr": "10.0.0.2", 00:18:45.965 "trsvcid": "4420" 00:18:45.965 }, 00:18:45.965 "peer_address": { 00:18:45.965 "trtype": "TCP", 00:18:45.965 "adrfam": "IPv4", 00:18:45.965 "traddr": "10.0.0.1", 00:18:45.965 "trsvcid": "53426" 00:18:45.965 }, 00:18:45.965 "auth": { 00:18:45.965 "state": "completed", 00:18:45.965 "digest": 
"sha256", 00:18:45.965 "dhgroup": "ffdhe2048" 00:18:45.965 } 00:18:45.965 } 00:18:45.965 ]' 00:18:45.965 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.225 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.793 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:46.793 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.730 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.298 09:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.298 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.299 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.299 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.866 00:18:48.866 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.866 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.867 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.140 { 00:18:49.140 "cntlid": 15, 00:18:49.140 "qid": 0, 00:18:49.140 "state": "enabled", 00:18:49.140 "thread": "nvmf_tgt_poll_group_000", 00:18:49.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:49.140 "listen_address": { 00:18:49.140 "trtype": "TCP", 00:18:49.140 "adrfam": "IPv4", 00:18:49.140 "traddr": "10.0.0.2", 00:18:49.140 "trsvcid": "4420" 00:18:49.140 }, 00:18:49.140 "peer_address": { 00:18:49.140 "trtype": "TCP", 00:18:49.140 "adrfam": "IPv4", 00:18:49.140 "traddr": "10.0.0.1", 00:18:49.140 
"trsvcid": "55846" 00:18:49.140 }, 00:18:49.140 "auth": { 00:18:49.140 "state": "completed", 00:18:49.140 "digest": "sha256", 00:18:49.140 "dhgroup": "ffdhe2048" 00:18:49.140 } 00:18:49.140 } 00:18:49.140 ]' 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.140 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.494 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.494 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.494 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.777 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:18:49.777 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.714 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:50.973 09:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.973 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.541 00:18:51.541 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.541 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.541 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.109 { 00:18:52.109 "cntlid": 17, 00:18:52.109 "qid": 0, 00:18:52.109 "state": "enabled", 00:18:52.109 "thread": "nvmf_tgt_poll_group_000", 00:18:52.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:52.109 "listen_address": { 00:18:52.109 "trtype": "TCP", 00:18:52.109 "adrfam": "IPv4", 
00:18:52.109 "traddr": "10.0.0.2", 00:18:52.109 "trsvcid": "4420" 00:18:52.109 }, 00:18:52.109 "peer_address": { 00:18:52.109 "trtype": "TCP", 00:18:52.109 "adrfam": "IPv4", 00:18:52.109 "traddr": "10.0.0.1", 00:18:52.109 "trsvcid": "55890" 00:18:52.109 }, 00:18:52.109 "auth": { 00:18:52.109 "state": "completed", 00:18:52.109 "digest": "sha256", 00:18:52.109 "dhgroup": "ffdhe3072" 00:18:52.109 } 00:18:52.109 } 00:18:52.109 ]' 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.109 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.675 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:52.675 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.061 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.320 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.579 00:18:54.579 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.579 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.579 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.180 { 
00:18:55.180 "cntlid": 19, 00:18:55.180 "qid": 0, 00:18:55.180 "state": "enabled", 00:18:55.180 "thread": "nvmf_tgt_poll_group_000", 00:18:55.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:55.180 "listen_address": { 00:18:55.180 "trtype": "TCP", 00:18:55.180 "adrfam": "IPv4", 00:18:55.180 "traddr": "10.0.0.2", 00:18:55.180 "trsvcid": "4420" 00:18:55.180 }, 00:18:55.180 "peer_address": { 00:18:55.180 "trtype": "TCP", 00:18:55.180 "adrfam": "IPv4", 00:18:55.180 "traddr": "10.0.0.1", 00:18:55.180 "trsvcid": "55900" 00:18:55.180 }, 00:18:55.180 "auth": { 00:18:55.180 "state": "completed", 00:18:55.180 "digest": "sha256", 00:18:55.180 "dhgroup": "ffdhe3072" 00:18:55.180 } 00:18:55.180 } 00:18:55.180 ]' 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.180 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.439 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:55.439 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.812 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.071 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.006 00:18:58.006 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.006 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.006 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.264 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.264 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.264 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.264 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.265 09:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.265 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.265 { 00:18:58.265 "cntlid": 21, 00:18:58.265 "qid": 0, 00:18:58.265 "state": "enabled", 00:18:58.265 "thread": "nvmf_tgt_poll_group_000", 00:18:58.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:58.265 "listen_address": { 00:18:58.265 "trtype": "TCP", 00:18:58.265 "adrfam": "IPv4", 00:18:58.265 "traddr": "10.0.0.2", 00:18:58.265 "trsvcid": "4420" 00:18:58.265 }, 00:18:58.265 "peer_address": { 00:18:58.265 "trtype": "TCP", 00:18:58.265 "adrfam": "IPv4", 00:18:58.265 "traddr": "10.0.0.1", 00:18:58.265 "trsvcid": "51592" 00:18:58.265 }, 00:18:58.265 "auth": { 00:18:58.265 "state": "completed", 00:18:58.265 "digest": "sha256", 00:18:58.265 "dhgroup": "ffdhe3072" 00:18:58.265 } 00:18:58.265 } 00:18:58.265 ]' 00:18:58.265 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.265 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.265 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.265 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.265 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.265 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.265 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.265 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.830 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:58.830 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.763 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.587 00:19:00.587 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.587 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.587 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.153 09:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.153 { 00:19:01.153 "cntlid": 23, 00:19:01.153 "qid": 0, 00:19:01.153 "state": "enabled", 00:19:01.153 "thread": "nvmf_tgt_poll_group_000", 00:19:01.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:01.153 "listen_address": { 00:19:01.153 "trtype": "TCP", 00:19:01.153 "adrfam": "IPv4", 00:19:01.153 "traddr": "10.0.0.2", 00:19:01.153 "trsvcid": "4420" 00:19:01.153 }, 00:19:01.153 "peer_address": { 00:19:01.153 "trtype": "TCP", 00:19:01.153 "adrfam": "IPv4", 00:19:01.153 "traddr": "10.0.0.1", 00:19:01.153 "trsvcid": "51612" 00:19:01.153 }, 00:19:01.153 "auth": { 00:19:01.153 "state": "completed", 00:19:01.153 "digest": "sha256", 00:19:01.153 "dhgroup": "ffdhe3072" 00:19:01.153 } 00:19:01.153 } 00:19:01.153 ]' 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.153 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.720 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:01.721 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.654 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.220 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.785 00:19:03.785 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.785 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.785 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.042 { 00:19:04.042 "cntlid": 25, 00:19:04.042 "qid": 0, 00:19:04.042 "state": "enabled", 00:19:04.042 "thread": "nvmf_tgt_poll_group_000", 00:19:04.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:04.042 "listen_address": { 00:19:04.042 "trtype": "TCP", 00:19:04.042 "adrfam": "IPv4", 00:19:04.042 "traddr": "10.0.0.2", 00:19:04.042 "trsvcid": "4420" 00:19:04.042 }, 00:19:04.042 "peer_address": { 00:19:04.042 "trtype": "TCP", 00:19:04.042 "adrfam": "IPv4", 00:19:04.042 "traddr": "10.0.0.1", 00:19:04.042 "trsvcid": "51646" 00:19:04.042 }, 00:19:04.042 "auth": { 00:19:04.042 "state": "completed", 00:19:04.042 "digest": "sha256", 00:19:04.042 "dhgroup": "ffdhe4096" 00:19:04.042 } 00:19:04.042 } 00:19:04.042 ]' 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.042 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.300 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.300 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.300 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.558 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:04.558 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.491 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.056 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.057 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.057 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.622 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.622 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.878 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.878 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.878 { 00:19:06.878 "cntlid": 27, 00:19:06.878 "qid": 0, 00:19:06.878 "state": "enabled", 00:19:06.878 "thread": "nvmf_tgt_poll_group_000", 00:19:06.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:06.878 "listen_address": { 00:19:06.878 "trtype": "TCP", 00:19:06.879 "adrfam": "IPv4", 00:19:06.879 "traddr": "10.0.0.2", 00:19:06.879 "trsvcid": "4420" 00:19:06.879 }, 00:19:06.879 "peer_address": { 00:19:06.879 "trtype": "TCP", 00:19:06.879 "adrfam": "IPv4", 00:19:06.879 "traddr": "10.0.0.1", 00:19:06.879 "trsvcid": "51672" 00:19:06.879 }, 00:19:06.879 "auth": { 00:19:06.879 "state": "completed", 00:19:06.879 "digest": "sha256", 00:19:06.879 "dhgroup": "ffdhe4096" 00:19:06.879 } 00:19:06.879 } 00:19:06.879 ]' 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.879 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.443 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:07.443 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:08.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.374 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.939 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.503 00:19:09.503 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:19:09.503 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.503 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.069 { 00:19:10.069 "cntlid": 29, 00:19:10.069 "qid": 0, 00:19:10.069 "state": "enabled", 00:19:10.069 "thread": "nvmf_tgt_poll_group_000", 00:19:10.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:10.069 "listen_address": { 00:19:10.069 "trtype": "TCP", 00:19:10.069 "adrfam": "IPv4", 00:19:10.069 "traddr": "10.0.0.2", 00:19:10.069 "trsvcid": "4420" 00:19:10.069 }, 00:19:10.069 "peer_address": { 00:19:10.069 "trtype": "TCP", 00:19:10.069 "adrfam": "IPv4", 00:19:10.069 "traddr": "10.0.0.1", 00:19:10.069 "trsvcid": "55762" 00:19:10.069 }, 00:19:10.069 "auth": { 00:19:10.069 "state": "completed", 00:19:10.069 "digest": "sha256", 00:19:10.069 "dhgroup": "ffdhe4096" 00:19:10.069 } 00:19:10.069 } 00:19:10.069 ]' 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.069 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.635 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:10.635 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: 
--dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.569 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.135 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.701 00:19:12.701 09:40:07 
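Once the controller attaches, each pass verifies the negotiated auth parameters on the target and then repeats the handshake through nvme-cli with the raw DHHC-1 secrets before tearing everything down. A condensed sketch of that verification half, using the same shorthands as above and only commands visible in the trace; DHCHAP_KEY and DHCHAP_CTRL_KEY stand in for the DHHC-1:xx:...: strings printed in the log, and the herestring piping into jq is an editorial condensation:

    # Target side: the qpair must have authenticated with the expected digest/dhgroup
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Host side: drop the bdev controller, then redo the handshake with nvme-cli
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
    nvme disconnect -n "$SUBNQN"

    # Target side: remove the host entry before the next digest/dhgroup/key combination
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"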
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.701 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.701 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.958 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.958 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.958 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.959 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.959 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.959 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.959 { 00:19:12.959 "cntlid": 31, 00:19:12.959 "qid": 0, 00:19:12.959 "state": "enabled", 00:19:12.959 "thread": "nvmf_tgt_poll_group_000", 00:19:12.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:12.959 "listen_address": { 00:19:12.959 "trtype": "TCP", 00:19:12.959 "adrfam": "IPv4", 00:19:12.959 "traddr": "10.0.0.2", 00:19:12.959 "trsvcid": "4420" 00:19:12.959 }, 00:19:12.959 "peer_address": { 00:19:12.959 "trtype": "TCP", 00:19:12.959 "adrfam": "IPv4", 00:19:12.959 "traddr": "10.0.0.1", 00:19:12.959 "trsvcid": "55788" 00:19:12.959 }, 00:19:12.959 "auth": { 00:19:12.959 "state": "completed", 00:19:12.959 "digest": "sha256", 00:19:12.959 "dhgroup": "ffdhe4096" 00:19:12.959 } 00:19:12.959 } 00:19:12.959 ]' 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.217 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.782 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:13.782 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.715 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.281 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.847 00:19:15.847 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.847 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.847 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.105 { 00:19:16.105 "cntlid": 33, 00:19:16.105 "qid": 0, 00:19:16.105 "state": "enabled", 00:19:16.105 "thread": "nvmf_tgt_poll_group_000", 00:19:16.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:16.105 "listen_address": { 00:19:16.105 "trtype": "TCP", 00:19:16.105 "adrfam": "IPv4", 00:19:16.105 "traddr": "10.0.0.2", 00:19:16.105 "trsvcid": "4420" 00:19:16.105 }, 00:19:16.105 "peer_address": { 00:19:16.105 "trtype": "TCP", 00:19:16.105 "adrfam": "IPv4", 00:19:16.105 "traddr": "10.0.0.1", 00:19:16.105 "trsvcid": "55814" 00:19:16.105 }, 00:19:16.105 "auth": { 00:19:16.105 "state": "completed", 00:19:16.105 "digest": "sha256", 00:19:16.105 "dhgroup": "ffdhe6144" 00:19:16.105 } 00:19:16.105 } 00:19:16.105 ]' 00:19:16.105 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.363 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.363 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.363 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.363 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.363 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.363 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.363 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.929 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret 
DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:16.930 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.331 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.331 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.706 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.706 { 00:19:19.706 "cntlid": 35, 00:19:19.706 "qid": 0, 00:19:19.706 "state": "enabled", 00:19:19.706 "thread": "nvmf_tgt_poll_group_000", 00:19:19.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:19.706 "listen_address": { 00:19:19.706 "trtype": "TCP", 00:19:19.706 "adrfam": "IPv4", 00:19:19.706 "traddr": "10.0.0.2", 00:19:19.706 "trsvcid": "4420" 00:19:19.706 }, 00:19:19.706 "peer_address": { 00:19:19.706 "trtype": "TCP", 00:19:19.706 "adrfam": "IPv4", 00:19:19.706 "traddr": "10.0.0.1", 00:19:19.706 "trsvcid": "58460" 00:19:19.706 }, 00:19:19.706 "auth": { 00:19:19.706 "state": "completed", 00:19:19.706 "digest": "sha256", 00:19:19.706 "dhgroup": "ffdhe6144" 00:19:19.706 } 00:19:19.706 } 00:19:19.706 ]' 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.706 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.707 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.963 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.963 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.963 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.963 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.963 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.528 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:20.528 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.536 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.537 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.102 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.036 00:19:23.036 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.036 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.036 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.294 { 00:19:23.294 "cntlid": 37, 00:19:23.294 "qid": 0, 00:19:23.294 "state": "enabled", 00:19:23.294 "thread": "nvmf_tgt_poll_group_000", 00:19:23.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:23.294 "listen_address": { 00:19:23.294 "trtype": "TCP", 00:19:23.294 "adrfam": "IPv4", 00:19:23.294 "traddr": "10.0.0.2", 00:19:23.294 "trsvcid": "4420" 00:19:23.294 }, 00:19:23.294 "peer_address": { 00:19:23.294 "trtype": "TCP", 00:19:23.294 "adrfam": "IPv4", 00:19:23.294 "traddr": "10.0.0.1", 00:19:23.294 "trsvcid": "58478" 00:19:23.294 }, 00:19:23.294 "auth": { 00:19:23.294 "state": "completed", 00:19:23.294 "digest": "sha256", 00:19:23.294 "dhgroup": "ffdhe6144" 00:19:23.294 } 00:19:23.294 } 00:19:23.294 ]' 00:19:23.294 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.294 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.294 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.294 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.294 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.294 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.552 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:23.552 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.809 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:23.809 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.740 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.306 09:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.306 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.307 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.243 00:19:26.243 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.243 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.243 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.501 { 00:19:26.501 "cntlid": 39, 00:19:26.501 "qid": 0, 00:19:26.501 "state": "enabled", 00:19:26.501 "thread": "nvmf_tgt_poll_group_000", 00:19:26.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:26.501 "listen_address": { 00:19:26.501 "trtype": "TCP", 00:19:26.501 "adrfam": "IPv4", 00:19:26.501 "traddr": "10.0.0.2", 00:19:26.501 "trsvcid": "4420" 00:19:26.501 }, 00:19:26.501 "peer_address": { 00:19:26.501 "trtype": "TCP", 00:19:26.501 "adrfam": "IPv4", 00:19:26.501 "traddr": "10.0.0.1", 00:19:26.501 "trsvcid": "58502" 00:19:26.501 }, 00:19:26.501 "auth": { 00:19:26.501 "state": "completed", 00:19:26.501 "digest": "sha256", 00:19:26.501 "dhgroup": "ffdhe6144" 00:19:26.501 } 00:19:26.501 } 00:19:26.501 ]' 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.501 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.758 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.758 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.758 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:26.758 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.758 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.016 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:27.016 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.390 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
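The trace above has just finished one nvmf_subsystem_add_host registration for key0 under sha256/ffdhe8192, and what follows is the host-side attach plus the state checks that the script repeats for every digest, DH group, and key index. As a reading aid, here is a condensed, hypothetical bash sketch of that repeated round, assembled only from commands visible in this trace: the RPC, HOST_SOCK, SUBNQN, and HOSTNQN variables are shorthands introduced here, the keyN/ckeyN names are assumed to have been registered earlier in the test, the target-side rpc.py socket is assumed to be the default, and the DHHC-1 secrets are placeholders rather than the literal values from the log.

#!/usr/bin/env bash
# Hypothetical condensed sketch of one connect_authenticate round, based only
# on the commands visible in the trace above; names below are illustrative.
set -e

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                      # host-side SPDK app (bdev_nvme)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

digest=$1   # e.g. sha256
dhgroup=$2  # e.g. ffdhe8192
keyid=$3    # e.g. 0 -> keyring names key0 / ckey0 (assumed registered earlier)

# 1. Pin the host to a single digest/DH-group combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the target subsystem with the DH-HMAC-CHAP key pair.
#    (The trace omits --dhchap-ctrlr-key for the key index with no controller key.)
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller from the host app; this is where authentication runs.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify: the controller exists and the target reports a completed auth state.
[[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# 5. Tear down the host-side controller before the kernel-initiator pass.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# 6. Repeat the handshake with the kernel initiator, then de-register the host.
#    ("DHHC-1:..." stands in for the real secrets shown in the trace.)
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The checks in step 4 mirror the jq filters seen throughout the trace (.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state), so a mismatch in the negotiated digest or DH group fails the round immediately under set -e.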
00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.977 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.910 00:19:29.910 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.910 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.910 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.168 { 00:19:30.168 "cntlid": 41, 00:19:30.168 "qid": 0, 00:19:30.168 "state": "enabled", 00:19:30.168 "thread": "nvmf_tgt_poll_group_000", 00:19:30.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:30.168 "listen_address": { 00:19:30.168 "trtype": "TCP", 00:19:30.168 "adrfam": "IPv4", 00:19:30.168 "traddr": "10.0.0.2", 00:19:30.168 "trsvcid": "4420" 00:19:30.168 }, 00:19:30.168 "peer_address": { 00:19:30.168 "trtype": "TCP", 00:19:30.168 "adrfam": "IPv4", 00:19:30.168 "traddr": "10.0.0.1", 00:19:30.168 "trsvcid": "43160" 00:19:30.168 }, 00:19:30.168 "auth": { 00:19:30.168 "state": "completed", 00:19:30.168 "digest": "sha256", 00:19:30.168 "dhgroup": "ffdhe8192" 00:19:30.168 } 00:19:30.168 } 00:19:30.168 ]' 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.426 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.426 09:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.426 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.426 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.426 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.992 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:30.992 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.927 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.185 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.558 00:19:33.558 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.558 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.558 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.815 { 00:19:33.815 "cntlid": 43, 00:19:33.815 "qid": 0, 00:19:33.815 "state": "enabled", 00:19:33.815 "thread": "nvmf_tgt_poll_group_000", 00:19:33.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:33.815 "listen_address": { 00:19:33.815 "trtype": "TCP", 00:19:33.815 "adrfam": "IPv4", 00:19:33.815 "traddr": "10.0.0.2", 00:19:33.815 "trsvcid": "4420" 00:19:33.815 }, 00:19:33.815 "peer_address": { 00:19:33.815 "trtype": "TCP", 00:19:33.815 "adrfam": "IPv4", 00:19:33.815 "traddr": "10.0.0.1", 00:19:33.815 "trsvcid": "43186" 00:19:33.815 }, 00:19:33.815 "auth": { 00:19:33.815 "state": "completed", 00:19:33.815 "digest": "sha256", 00:19:33.815 "dhgroup": "ffdhe8192" 00:19:33.815 } 00:19:33.815 } 00:19:33.815 ]' 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:33.815 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.073 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.073 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.073 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.073 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.073 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.331 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:34.331 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.705 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.963 09:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.963 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.899 00:19:36.899 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.899 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.899 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.157 { 00:19:37.157 "cntlid": 45, 00:19:37.157 "qid": 0, 00:19:37.157 "state": "enabled", 00:19:37.157 "thread": "nvmf_tgt_poll_group_000", 00:19:37.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:37.157 "listen_address": { 00:19:37.157 "trtype": "TCP", 00:19:37.157 "adrfam": "IPv4", 00:19:37.157 "traddr": "10.0.0.2", 00:19:37.157 "trsvcid": "4420" 00:19:37.157 }, 00:19:37.157 "peer_address": { 00:19:37.157 "trtype": "TCP", 00:19:37.157 "adrfam": "IPv4", 00:19:37.157 "traddr": "10.0.0.1", 00:19:37.157 "trsvcid": "43208" 00:19:37.157 }, 00:19:37.157 "auth": { 00:19:37.157 "state": "completed", 00:19:37.157 "digest": "sha256", 00:19:37.157 "dhgroup": "ffdhe8192" 00:19:37.157 } 00:19:37.157 } 00:19:37.157 ]' 00:19:37.157 
09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.157 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.415 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.415 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.415 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.674 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:37.674 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.048 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.306 09:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.306 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.240 00:19:40.240 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.240 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.240 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.806 { 00:19:40.806 "cntlid": 47, 00:19:40.806 "qid": 0, 00:19:40.806 "state": "enabled", 00:19:40.806 "thread": "nvmf_tgt_poll_group_000", 00:19:40.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:40.806 "listen_address": { 00:19:40.806 "trtype": "TCP", 00:19:40.806 "adrfam": "IPv4", 00:19:40.806 "traddr": "10.0.0.2", 00:19:40.806 "trsvcid": "4420" 00:19:40.806 }, 00:19:40.806 "peer_address": { 00:19:40.806 "trtype": "TCP", 00:19:40.806 "adrfam": "IPv4", 00:19:40.806 "traddr": "10.0.0.1", 00:19:40.806 "trsvcid": "41708" 00:19:40.806 }, 00:19:40.806 "auth": { 00:19:40.806 "state": "completed", 00:19:40.806 
"digest": "sha256", 00:19:40.806 "dhgroup": "ffdhe8192" 00:19:40.806 } 00:19:40.806 } 00:19:40.806 ]' 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.806 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:41.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.439 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:42.697 09:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.697 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.263 00:19:43.263 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.263 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.263 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.522 { 00:19:43.522 "cntlid": 49, 00:19:43.522 "qid": 0, 00:19:43.522 "state": "enabled", 00:19:43.522 "thread": "nvmf_tgt_poll_group_000", 00:19:43.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:43.522 "listen_address": { 00:19:43.522 "trtype": "TCP", 00:19:43.522 "adrfam": "IPv4", 
00:19:43.522 "traddr": "10.0.0.2", 00:19:43.522 "trsvcid": "4420" 00:19:43.522 }, 00:19:43.522 "peer_address": { 00:19:43.522 "trtype": "TCP", 00:19:43.522 "adrfam": "IPv4", 00:19:43.522 "traddr": "10.0.0.1", 00:19:43.522 "trsvcid": "41740" 00:19:43.522 }, 00:19:43.522 "auth": { 00:19:43.522 "state": "completed", 00:19:43.522 "digest": "sha384", 00:19:43.522 "dhgroup": "null" 00:19:43.522 } 00:19:43.522 } 00:19:43.522 ]' 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.522 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.087 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:44.087 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.461 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.461 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.028 00:19:46.028 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.028 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.028 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.591 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.591 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.591 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.592 { 00:19:46.592 "cntlid": 51, 00:19:46.592 "qid": 0, 00:19:46.592 "state": "enabled", 
00:19:46.592 "thread": "nvmf_tgt_poll_group_000", 00:19:46.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:46.592 "listen_address": { 00:19:46.592 "trtype": "TCP", 00:19:46.592 "adrfam": "IPv4", 00:19:46.592 "traddr": "10.0.0.2", 00:19:46.592 "trsvcid": "4420" 00:19:46.592 }, 00:19:46.592 "peer_address": { 00:19:46.592 "trtype": "TCP", 00:19:46.592 "adrfam": "IPv4", 00:19:46.592 "traddr": "10.0.0.1", 00:19:46.592 "trsvcid": "41770" 00:19:46.592 }, 00:19:46.592 "auth": { 00:19:46.592 "state": "completed", 00:19:46.592 "digest": "sha384", 00:19:46.592 "dhgroup": "null" 00:19:46.592 } 00:19:46.592 } 00:19:46.592 ]' 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.592 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.154 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:47.154 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:48.085 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.651 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.651 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.651 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.584 00:19:49.584 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.584 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.584 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.150 09:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.150 { 00:19:50.150 "cntlid": 53, 00:19:50.150 "qid": 0, 00:19:50.150 "state": "enabled", 00:19:50.150 "thread": "nvmf_tgt_poll_group_000", 00:19:50.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:50.150 "listen_address": { 00:19:50.150 "trtype": "TCP", 00:19:50.150 "adrfam": "IPv4", 00:19:50.150 "traddr": "10.0.0.2", 00:19:50.150 "trsvcid": "4420" 00:19:50.150 }, 00:19:50.150 "peer_address": { 00:19:50.150 "trtype": "TCP", 00:19:50.150 "adrfam": "IPv4", 00:19:50.150 "traddr": "10.0.0.1", 00:19:50.150 "trsvcid": "33678" 00:19:50.150 }, 00:19:50.150 "auth": { 00:19:50.150 "state": "completed", 00:19:50.150 "digest": "sha384", 00:19:50.150 "dhgroup": "null" 00:19:50.150 } 00:19:50.150 } 00:19:50.150 ]' 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.150 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.716 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:50.716 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.087 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.345 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.603 00:19:52.603 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.603 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.603 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.585 { 00:19:53.585 "cntlid": 55, 00:19:53.585 "qid": 0, 00:19:53.585 "state": "enabled", 00:19:53.585 "thread": "nvmf_tgt_poll_group_000", 00:19:53.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:53.585 "listen_address": { 00:19:53.585 "trtype": "TCP", 00:19:53.585 "adrfam": "IPv4", 00:19:53.585 "traddr": "10.0.0.2", 00:19:53.585 "trsvcid": "4420" 00:19:53.585 }, 00:19:53.585 "peer_address": { 00:19:53.585 "trtype": "TCP", 00:19:53.585 "adrfam": "IPv4", 00:19:53.585 "traddr": "10.0.0.1", 00:19:53.585 "trsvcid": "33712" 00:19:53.585 }, 00:19:53.585 "auth": { 00:19:53.585 "state": "completed", 00:19:53.585 "digest": "sha384", 00:19:53.585 "dhgroup": "null" 00:19:53.585 } 00:19:53.585 } 00:19:53.585 ]' 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.585 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.149 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:54.149 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.521 09:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.521 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.778 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.035 00:19:56.292 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.292 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.292 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.857 { 00:19:56.857 "cntlid": 57, 00:19:56.857 "qid": 0, 00:19:56.857 "state": "enabled", 00:19:56.857 "thread": "nvmf_tgt_poll_group_000", 00:19:56.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:56.857 "listen_address": { 00:19:56.857 "trtype": "TCP", 00:19:56.857 "adrfam": "IPv4", 00:19:56.857 "traddr": "10.0.0.2", 00:19:56.857 "trsvcid": "4420" 00:19:56.857 }, 00:19:56.857 "peer_address": { 00:19:56.857 "trtype": "TCP", 00:19:56.857 "adrfam": "IPv4", 00:19:56.857 "traddr": "10.0.0.1", 00:19:56.857 "trsvcid": "33732" 00:19:56.857 }, 00:19:56.857 "auth": { 00:19:56.857 "state": "completed", 00:19:56.857 "digest": "sha384", 00:19:56.857 "dhgroup": "ffdhe2048" 00:19:56.857 } 00:19:56.857 } 00:19:56.857 ]' 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.857 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.423 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:57.423 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.795 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.359 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.617 00:19:59.617 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.617 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.617 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.181 { 00:20:00.181 "cntlid": 59, 00:20:00.181 "qid": 0, 00:20:00.181 "state": "enabled", 00:20:00.181 "thread": "nvmf_tgt_poll_group_000", 00:20:00.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:00.181 "listen_address": { 00:20:00.181 "trtype": "TCP", 00:20:00.181 "adrfam": "IPv4", 00:20:00.181 "traddr": "10.0.0.2", 00:20:00.181 "trsvcid": "4420" 00:20:00.181 }, 00:20:00.181 "peer_address": { 00:20:00.181 "trtype": "TCP", 00:20:00.181 "adrfam": "IPv4", 00:20:00.181 "traddr": "10.0.0.1", 00:20:00.181 "trsvcid": "51804" 00:20:00.181 }, 00:20:00.181 "auth": { 00:20:00.181 "state": "completed", 00:20:00.181 "digest": "sha384", 00:20:00.181 "dhgroup": "ffdhe2048" 00:20:00.181 } 00:20:00.181 } 00:20:00.181 ]' 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.181 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.440 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.440 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.440 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.440 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.697 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:00.697 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.072 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.330 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.895 00:20:02.895 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.895 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.896 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.153 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.153 { 00:20:03.154 "cntlid": 61, 00:20:03.154 "qid": 0, 00:20:03.154 "state": "enabled", 00:20:03.154 "thread": "nvmf_tgt_poll_group_000", 00:20:03.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:03.154 "listen_address": { 00:20:03.154 "trtype": "TCP", 00:20:03.154 "adrfam": "IPv4", 00:20:03.154 "traddr": "10.0.0.2", 00:20:03.154 "trsvcid": "4420" 00:20:03.154 }, 00:20:03.154 "peer_address": { 00:20:03.154 "trtype": "TCP", 00:20:03.154 "adrfam": "IPv4", 00:20:03.154 "traddr": "10.0.0.1", 00:20:03.154 "trsvcid": "51826" 00:20:03.154 }, 00:20:03.154 "auth": { 00:20:03.154 "state": "completed", 00:20:03.154 "digest": "sha384", 00:20:03.154 "dhgroup": "ffdhe2048" 00:20:03.154 } 00:20:03.154 } 00:20:03.154 ]' 00:20:03.154 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.154 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.154 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.154 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.154 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.411 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.411 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.411 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.976 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:03.976 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.909 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.474 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.731 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.731 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.731 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.731 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.989 00:20:05.989 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.989 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.989 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.554 { 00:20:06.554 "cntlid": 63, 00:20:06.554 "qid": 0, 00:20:06.554 "state": "enabled", 00:20:06.554 "thread": "nvmf_tgt_poll_group_000", 00:20:06.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:06.554 "listen_address": { 00:20:06.554 "trtype": "TCP", 00:20:06.554 "adrfam": "IPv4", 00:20:06.554 "traddr": "10.0.0.2", 00:20:06.554 "trsvcid": "4420" 00:20:06.554 }, 00:20:06.554 "peer_address": { 00:20:06.554 "trtype": "TCP", 00:20:06.554 "adrfam": "IPv4", 00:20:06.554 "traddr": "10.0.0.1", 00:20:06.554 "trsvcid": "51846" 00:20:06.554 }, 00:20:06.554 "auth": { 00:20:06.554 "state": "completed", 00:20:06.554 "digest": "sha384", 00:20:06.554 "dhgroup": "ffdhe2048" 00:20:06.554 } 00:20:06.554 } 00:20:06.554 ]' 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.554 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.812 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.812 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.812 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.377 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:07.377 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:08.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.311 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.876 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.135 
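For orientation, every pass traced above follows the same target/auth.sh pattern. The following is a minimal shell sketch of one pass (sha384 with ffdhe2048 and key0) built only from the RPC invocations visible in this log; it assumes the target and host RPC servers started earlier in the run are still up, that key0/ckey0 were registered before this excerpt, and that the target-side rpc_cmd talks to the default SPDK RPC socket.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict the initiator to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN with the matching DH-CHAP key pair
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller over TCP, authenticating in-band with the same keys
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0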
00:20:09.135 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.135 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.135 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.701 { 00:20:09.701 "cntlid": 65, 00:20:09.701 "qid": 0, 00:20:09.701 "state": "enabled", 00:20:09.701 "thread": "nvmf_tgt_poll_group_000", 00:20:09.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:09.701 "listen_address": { 00:20:09.701 "trtype": "TCP", 00:20:09.701 "adrfam": "IPv4", 00:20:09.701 "traddr": "10.0.0.2", 00:20:09.701 "trsvcid": "4420" 00:20:09.701 }, 00:20:09.701 "peer_address": { 00:20:09.701 "trtype": "TCP", 00:20:09.701 "adrfam": "IPv4", 00:20:09.701 "traddr": "10.0.0.1", 00:20:09.701 "trsvcid": "56022" 00:20:09.701 }, 00:20:09.701 "auth": { 00:20:09.701 "state": "completed", 00:20:09.701 "digest": "sha384", 00:20:09.701 "dhgroup": "ffdhe3072" 00:20:09.701 } 00:20:09.701 } 00:20:09.701 ]' 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.701 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.958 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:09.958 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.328 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.892 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.893 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.457 00:20:12.457 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.457 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.457 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.715 { 00:20:12.715 "cntlid": 67, 00:20:12.715 "qid": 0, 00:20:12.715 "state": "enabled", 00:20:12.715 "thread": "nvmf_tgt_poll_group_000", 00:20:12.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:12.715 "listen_address": { 00:20:12.715 "trtype": "TCP", 00:20:12.715 "adrfam": "IPv4", 00:20:12.715 "traddr": "10.0.0.2", 00:20:12.715 "trsvcid": "4420" 00:20:12.715 }, 00:20:12.715 "peer_address": { 00:20:12.715 "trtype": "TCP", 00:20:12.715 "adrfam": "IPv4", 00:20:12.715 "traddr": "10.0.0.1", 00:20:12.715 "trsvcid": "56044" 00:20:12.715 }, 00:20:12.715 "auth": { 00:20:12.715 "state": "completed", 00:20:12.715 "digest": "sha384", 00:20:12.715 "dhgroup": "ffdhe3072" 00:20:12.715 } 00:20:12.715 } 00:20:12.715 ]' 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.715 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.972 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.972 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.972 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.230 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret 
DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:13.230 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.603 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.535 00:20:15.535 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.535 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.535 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.793 { 00:20:15.793 "cntlid": 69, 00:20:15.793 "qid": 0, 00:20:15.793 "state": "enabled", 00:20:15.793 "thread": "nvmf_tgt_poll_group_000", 00:20:15.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:15.793 "listen_address": { 00:20:15.793 "trtype": "TCP", 00:20:15.793 "adrfam": "IPv4", 00:20:15.793 "traddr": "10.0.0.2", 00:20:15.793 "trsvcid": "4420" 00:20:15.793 }, 00:20:15.793 "peer_address": { 00:20:15.793 "trtype": "TCP", 00:20:15.793 "adrfam": "IPv4", 00:20:15.793 "traddr": "10.0.0.1", 00:20:15.793 "trsvcid": "56072" 00:20:15.793 }, 00:20:15.793 "auth": { 00:20:15.793 "state": "completed", 00:20:15.793 "digest": "sha384", 00:20:15.793 "dhgroup": "ffdhe3072" 00:20:15.793 } 00:20:15.793 } 00:20:15.793 ]' 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.793 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.050 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.050 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.050 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.050 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.050 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:16.614 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:16.614 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.986 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
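The check that follows each attach (target/auth.sh@73 through @78 in the trace above) uses the same jq filters the script prints; a minimal sketch of that verification step, assuming jq is on PATH and using the dhgroup in effect at this point in the run (ffdhe3072):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as in the sketch above
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: the attached controller should be reported as nvme0
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # Target side: the qpair must have completed DH-CHAP with the expected parameters
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach again before the kernel-initiator leg of the pass
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0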
00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.244 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.502 00:20:18.502 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.502 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.502 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.067 { 00:20:19.067 "cntlid": 71, 00:20:19.067 "qid": 0, 00:20:19.067 "state": "enabled", 00:20:19.067 "thread": "nvmf_tgt_poll_group_000", 00:20:19.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:19.067 "listen_address": { 00:20:19.067 "trtype": "TCP", 00:20:19.067 "adrfam": "IPv4", 00:20:19.067 "traddr": "10.0.0.2", 00:20:19.067 "trsvcid": "4420" 00:20:19.067 }, 00:20:19.067 "peer_address": { 00:20:19.067 "trtype": "TCP", 00:20:19.067 "adrfam": "IPv4", 00:20:19.067 "traddr": "10.0.0.1", 00:20:19.067 "trsvcid": "32898" 00:20:19.067 }, 00:20:19.067 "auth": { 00:20:19.067 "state": "completed", 00:20:19.067 "digest": "sha384", 00:20:19.067 "dhgroup": "ffdhe3072" 00:20:19.067 } 00:20:19.067 } 00:20:19.067 ]' 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.067 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.325 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:19.325 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:20.697 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.698 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.955 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
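Stepping back, the target/auth.sh@119, @120, @121 and @123 markers in the trace show the two loops this whole section is walking: an outer loop over DH groups and an inner loop over the four key indexes, with the host reconfigured before every connect_authenticate call. Reconstructed as a rough sketch (the loop lists are inferred only from the values visible in this part of the log, namely the sha384 digest, groups ffdhe3072 through ffdhe8192 and keys 0-3; the script's real arrays may hold more entries):

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do     # target/auth.sh@119
    for keyid in 0 1 2 3; do                                   # target/auth.sh@120
        # limit the host to a single digest/dhgroup combination (@121)
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # add_host, attach, qpair checks, nvme-cli connect and teardown (the @65-@83 lines), as sketched earlier
        connect_authenticate sha384 "$dhgroup" "$keyid"        # target/auth.sh@123
    done
done

hostrpc and connect_authenticate are the script's own helpers exactly as they appear in the xtrace; only the literal loop lists here are an assumption.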
00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.956 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.889 00:20:21.889 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.890 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.890 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.147 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.147 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.148 { 00:20:22.148 "cntlid": 73, 00:20:22.148 "qid": 0, 00:20:22.148 "state": "enabled", 00:20:22.148 "thread": "nvmf_tgt_poll_group_000", 00:20:22.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:22.148 "listen_address": { 00:20:22.148 "trtype": "TCP", 00:20:22.148 "adrfam": "IPv4", 00:20:22.148 "traddr": "10.0.0.2", 00:20:22.148 "trsvcid": "4420" 00:20:22.148 }, 00:20:22.148 "peer_address": { 00:20:22.148 "trtype": "TCP", 00:20:22.148 "adrfam": "IPv4", 00:20:22.148 "traddr": "10.0.0.1", 00:20:22.148 "trsvcid": "32922" 00:20:22.148 }, 00:20:22.148 "auth": { 00:20:22.148 "state": "completed", 00:20:22.148 "digest": "sha384", 00:20:22.148 "dhgroup": "ffdhe4096" 00:20:22.148 } 00:20:22.148 } 00:20:22.148 ]' 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.148 
09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.148 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.713 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:22.713 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.087 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.016 00:20:25.016 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.016 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.016 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.354 { 00:20:25.354 "cntlid": 75, 00:20:25.354 "qid": 0, 00:20:25.354 "state": "enabled", 00:20:25.354 "thread": "nvmf_tgt_poll_group_000", 00:20:25.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:25.354 "listen_address": { 00:20:25.354 "trtype": "TCP", 00:20:25.354 "adrfam": "IPv4", 00:20:25.354 "traddr": "10.0.0.2", 00:20:25.354 "trsvcid": "4420" 00:20:25.354 }, 00:20:25.354 "peer_address": { 00:20:25.354 "trtype": "TCP", 00:20:25.354 "adrfam": "IPv4", 00:20:25.354 "traddr": "10.0.0.1", 00:20:25.354 "trsvcid": "32966" 00:20:25.354 }, 00:20:25.354 "auth": { 00:20:25.354 "state": "completed", 00:20:25.354 "digest": "sha384", 00:20:25.354 "dhgroup": "ffdhe4096" 00:20:25.354 } 00:20:25.354 } 00:20:25.354 ]' 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.354 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.653 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:25.653 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.024 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.588 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.153 00:20:28.153 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.153 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.153 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.411 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.411 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.411 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.411 { 00:20:28.411 "cntlid": 77, 00:20:28.411 "qid": 0, 00:20:28.411 "state": "enabled", 00:20:28.411 "thread": "nvmf_tgt_poll_group_000", 00:20:28.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:28.411 "listen_address": { 00:20:28.411 "trtype": "TCP", 00:20:28.411 "adrfam": "IPv4", 00:20:28.411 "traddr": "10.0.0.2", 00:20:28.411 "trsvcid": "4420" 00:20:28.411 }, 00:20:28.411 "peer_address": { 00:20:28.411 "trtype": "TCP", 00:20:28.411 "adrfam": "IPv4", 00:20:28.411 "traddr": "10.0.0.1", 00:20:28.411 "trsvcid": "43962" 00:20:28.411 }, 00:20:28.411 "auth": { 00:20:28.411 "state": "completed", 00:20:28.411 "digest": "sha384", 00:20:28.411 "dhgroup": "ffdhe4096" 00:20:28.411 } 00:20:28.411 } 00:20:28.411 ]' 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.411 09:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.411 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.342 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:29.342 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:30.275 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.275 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.275 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.275 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.533 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.533 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.533 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.533 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.790 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.355 00:20:31.355 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.355 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.355 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.920 { 00:20:31.920 "cntlid": 79, 00:20:31.920 "qid": 0, 00:20:31.920 "state": "enabled", 00:20:31.920 "thread": "nvmf_tgt_poll_group_000", 00:20:31.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:31.920 "listen_address": { 00:20:31.920 "trtype": "TCP", 00:20:31.920 "adrfam": "IPv4", 00:20:31.920 "traddr": "10.0.0.2", 00:20:31.920 "trsvcid": "4420" 00:20:31.920 }, 00:20:31.920 "peer_address": { 00:20:31.920 "trtype": "TCP", 00:20:31.920 "adrfam": "IPv4", 00:20:31.920 "traddr": "10.0.0.1", 00:20:31.920 "trsvcid": "43992" 00:20:31.920 }, 00:20:31.920 "auth": { 00:20:31.920 "state": "completed", 00:20:31.920 "digest": "sha384", 00:20:31.920 "dhgroup": "ffdhe4096" 00:20:31.920 } 00:20:31.920 } 00:20:31.920 ]' 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.920 09:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.920 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.177 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.177 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.177 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.742 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:32.742 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.673 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:34.238 09:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.238 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.803 00:20:34.803 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.803 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.803 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.368 { 00:20:35.368 "cntlid": 81, 00:20:35.368 "qid": 0, 00:20:35.368 "state": "enabled", 00:20:35.368 "thread": "nvmf_tgt_poll_group_000", 00:20:35.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:35.368 "listen_address": { 00:20:35.368 "trtype": "TCP", 00:20:35.368 "adrfam": "IPv4", 00:20:35.368 "traddr": "10.0.0.2", 00:20:35.368 "trsvcid": "4420" 00:20:35.368 }, 00:20:35.368 "peer_address": { 00:20:35.368 "trtype": "TCP", 00:20:35.368 "adrfam": "IPv4", 00:20:35.368 "traddr": "10.0.0.1", 00:20:35.368 "trsvcid": "44006" 00:20:35.368 }, 00:20:35.368 "auth": { 00:20:35.368 "state": "completed", 00:20:35.368 "digest": 
"sha384", 00:20:35.368 "dhgroup": "ffdhe6144" 00:20:35.368 } 00:20:35.368 } 00:20:35.368 ]' 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.368 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.625 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.625 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.625 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.882 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:35.882 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.256 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.256 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.188 00:20:38.188 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.188 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.188 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.757 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.757 { 00:20:38.757 "cntlid": 83, 00:20:38.757 "qid": 0, 00:20:38.757 "state": "enabled", 00:20:38.757 "thread": "nvmf_tgt_poll_group_000", 00:20:38.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:38.757 "listen_address": { 00:20:38.757 "trtype": "TCP", 00:20:38.757 "adrfam": "IPv4", 00:20:38.757 "traddr": "10.0.0.2", 00:20:38.757 
"trsvcid": "4420" 00:20:38.757 }, 00:20:38.757 "peer_address": { 00:20:38.757 "trtype": "TCP", 00:20:38.757 "adrfam": "IPv4", 00:20:38.757 "traddr": "10.0.0.1", 00:20:38.757 "trsvcid": "41022" 00:20:38.757 }, 00:20:38.757 "auth": { 00:20:38.757 "state": "completed", 00:20:38.757 "digest": "sha384", 00:20:38.757 "dhgroup": "ffdhe6144" 00:20:38.757 } 00:20:38.758 } 00:20:38.758 ]' 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.758 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.689 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:39.689 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.620 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.186 
09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.186 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.751 00:20:41.751 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.751 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.751 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.009 { 00:20:42.009 "cntlid": 85, 00:20:42.009 "qid": 0, 00:20:42.009 "state": "enabled", 00:20:42.009 "thread": "nvmf_tgt_poll_group_000", 00:20:42.009 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:42.009 "listen_address": { 00:20:42.009 "trtype": "TCP", 00:20:42.009 "adrfam": "IPv4", 00:20:42.009 "traddr": "10.0.0.2", 00:20:42.009 "trsvcid": "4420" 00:20:42.009 }, 00:20:42.009 "peer_address": { 00:20:42.009 "trtype": "TCP", 00:20:42.009 "adrfam": "IPv4", 00:20:42.009 "traddr": "10.0.0.1", 00:20:42.009 "trsvcid": "41048" 00:20:42.009 }, 00:20:42.009 "auth": { 00:20:42.009 "state": "completed", 00:20:42.009 "digest": "sha384", 00:20:42.009 "dhgroup": "ffdhe6144" 00:20:42.009 } 00:20:42.009 } 00:20:42.009 ]' 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.009 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.266 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.266 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.266 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.831 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:42.831 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.202 09:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.202 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.135 00:20:45.135 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.135 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.135 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.700 { 00:20:45.700 "cntlid": 87, 
00:20:45.700 "qid": 0, 00:20:45.700 "state": "enabled", 00:20:45.700 "thread": "nvmf_tgt_poll_group_000", 00:20:45.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:45.700 "listen_address": { 00:20:45.700 "trtype": "TCP", 00:20:45.700 "adrfam": "IPv4", 00:20:45.700 "traddr": "10.0.0.2", 00:20:45.700 "trsvcid": "4420" 00:20:45.700 }, 00:20:45.700 "peer_address": { 00:20:45.700 "trtype": "TCP", 00:20:45.700 "adrfam": "IPv4", 00:20:45.700 "traddr": "10.0.0.1", 00:20:45.700 "trsvcid": "41074" 00:20:45.700 }, 00:20:45.700 "auth": { 00:20:45.700 "state": "completed", 00:20:45.700 "digest": "sha384", 00:20:45.700 "dhgroup": "ffdhe6144" 00:20:45.700 } 00:20:45.700 } 00:20:45.700 ]' 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.700 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.265 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:46.265 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:20:47.199 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.456 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.457 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.714 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.086 00:20:49.086 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.086 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.086 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.343 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.343 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.343 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.343 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.343 { 00:20:49.343 "cntlid": 89, 00:20:49.343 "qid": 0, 00:20:49.343 "state": "enabled", 00:20:49.343 "thread": "nvmf_tgt_poll_group_000", 00:20:49.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:49.343 "listen_address": { 00:20:49.343 "trtype": "TCP", 00:20:49.343 "adrfam": "IPv4", 00:20:49.343 "traddr": "10.0.0.2", 00:20:49.343 "trsvcid": "4420" 00:20:49.343 }, 00:20:49.343 "peer_address": { 00:20:49.343 "trtype": "TCP", 00:20:49.343 "adrfam": "IPv4", 00:20:49.343 "traddr": "10.0.0.1", 00:20:49.343 "trsvcid": "42112" 00:20:49.343 }, 00:20:49.343 "auth": { 00:20:49.343 "state": "completed", 00:20:49.343 "digest": "sha384", 00:20:49.343 "dhgroup": "ffdhe8192" 00:20:49.343 } 00:20:49.343 } 00:20:49.343 ]' 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.343 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.908 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:49.908 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.279 09:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.279 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.537 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.468 00:20:52.726 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.726 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.726 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.291 { 00:20:53.291 "cntlid": 91, 00:20:53.291 "qid": 0, 00:20:53.291 "state": "enabled", 00:20:53.291 "thread": "nvmf_tgt_poll_group_000", 00:20:53.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:53.291 "listen_address": { 00:20:53.291 "trtype": "TCP", 00:20:53.291 "adrfam": "IPv4", 00:20:53.291 "traddr": "10.0.0.2", 00:20:53.291 "trsvcid": "4420" 00:20:53.291 }, 00:20:53.291 "peer_address": { 00:20:53.291 "trtype": "TCP", 00:20:53.291 "adrfam": "IPv4", 00:20:53.291 "traddr": "10.0.0.1", 00:20:53.291 "trsvcid": "42136" 00:20:53.291 }, 00:20:53.291 "auth": { 00:20:53.291 "state": "completed", 00:20:53.291 "digest": "sha384", 00:20:53.291 "dhgroup": "ffdhe8192" 00:20:53.291 } 00:20:53.291 } 00:20:53.291 ]' 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.291 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.291 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.291 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.291 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.855 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:53.855 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:55.227 09:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.227 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.485 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.418 00:20:56.418 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.418 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.418 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.984 09:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.984 { 00:20:56.984 "cntlid": 93, 00:20:56.984 "qid": 0, 00:20:56.984 "state": "enabled", 00:20:56.984 "thread": "nvmf_tgt_poll_group_000", 00:20:56.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:56.984 "listen_address": { 00:20:56.984 "trtype": "TCP", 00:20:56.984 "adrfam": "IPv4", 00:20:56.984 "traddr": "10.0.0.2", 00:20:56.984 "trsvcid": "4420" 00:20:56.984 }, 00:20:56.984 "peer_address": { 00:20:56.984 "trtype": "TCP", 00:20:56.984 "adrfam": "IPv4", 00:20:56.984 "traddr": "10.0.0.1", 00:20:56.984 "trsvcid": "42158" 00:20:56.984 }, 00:20:56.984 "auth": { 00:20:56.984 "state": "completed", 00:20:56.984 "digest": "sha384", 00:20:56.984 "dhgroup": "ffdhe8192" 00:20:56.984 } 00:20:56.984 } 00:20:56.984 ]' 00:20:56.984 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.241 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.861 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:57.861 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.233 09:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.233 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.491 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.423 00:21:00.423 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.423 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.424 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.988 { 00:21:00.988 "cntlid": 95, 00:21:00.988 "qid": 0, 00:21:00.988 "state": "enabled", 00:21:00.988 "thread": "nvmf_tgt_poll_group_000", 00:21:00.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:00.988 "listen_address": { 00:21:00.988 "trtype": "TCP", 00:21:00.988 "adrfam": "IPv4", 00:21:00.988 "traddr": "10.0.0.2", 00:21:00.988 "trsvcid": "4420" 00:21:00.988 }, 00:21:00.988 "peer_address": { 00:21:00.988 "trtype": "TCP", 00:21:00.988 "adrfam": "IPv4", 00:21:00.988 "traddr": "10.0.0.1", 00:21:00.988 "trsvcid": "35650" 00:21:00.988 }, 00:21:00.988 "auth": { 00:21:00.988 "state": "completed", 00:21:00.988 "digest": "sha384", 00:21:00.988 "dhgroup": "ffdhe8192" 00:21:00.988 } 00:21:00.988 } 00:21:00.988 ]' 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.988 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.245 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:01.245 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.636 09:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.636 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.893 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.823 00:21:03.823 
09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.823 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.823 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.081 { 00:21:04.081 "cntlid": 97, 00:21:04.081 "qid": 0, 00:21:04.081 "state": "enabled", 00:21:04.081 "thread": "nvmf_tgt_poll_group_000", 00:21:04.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:04.081 "listen_address": { 00:21:04.081 "trtype": "TCP", 00:21:04.081 "adrfam": "IPv4", 00:21:04.081 "traddr": "10.0.0.2", 00:21:04.081 "trsvcid": "4420" 00:21:04.081 }, 00:21:04.081 "peer_address": { 00:21:04.081 "trtype": "TCP", 00:21:04.081 "adrfam": "IPv4", 00:21:04.081 "traddr": "10.0.0.1", 00:21:04.081 "trsvcid": "35674" 00:21:04.081 }, 00:21:04.081 "auth": { 00:21:04.081 "state": "completed", 00:21:04.081 "digest": "sha512", 00:21:04.081 "dhgroup": "null" 00:21:04.081 } 00:21:04.081 } 00:21:04.081 ]' 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:04.081 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.339 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.339 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.339 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.596 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:04.597 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:05.528 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.786 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.042 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.043 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.043 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.607 00:21:06.607 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.607 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.608 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.172 { 00:21:07.172 "cntlid": 99, 00:21:07.172 "qid": 0, 00:21:07.172 "state": "enabled", 00:21:07.172 "thread": "nvmf_tgt_poll_group_000", 00:21:07.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:07.172 "listen_address": { 00:21:07.172 "trtype": "TCP", 00:21:07.172 "adrfam": "IPv4", 00:21:07.172 "traddr": "10.0.0.2", 00:21:07.172 "trsvcid": "4420" 00:21:07.172 }, 00:21:07.172 "peer_address": { 00:21:07.172 "trtype": "TCP", 00:21:07.172 "adrfam": "IPv4", 00:21:07.172 "traddr": "10.0.0.1", 00:21:07.172 "trsvcid": "35684" 00:21:07.172 }, 00:21:07.172 "auth": { 00:21:07.172 "state": "completed", 00:21:07.172 "digest": "sha512", 00:21:07.172 "dhgroup": "null" 00:21:07.172 } 00:21:07.172 } 00:21:07.172 ]' 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.172 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.429 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.429 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.429 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.687 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:07.687 09:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
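The trace at this point runs one pass of the connect_authenticate helper for the sha512/null combination: the host-side bdev_nvme options are restricted to a single digest and dhgroup, the host NQN is added to the subsystem with a DH-HMAC-CHAP key pair, a controller is attached through the host RPC socket, and the resulting qpair is checked with jq before being torn down. A minimal standalone sketch of that sequence, reusing only the RPCs, addresses and NQNs that appear in this run (it is not the actual helper, and it assumes the target listens on the default RPC socket and that keys named key2/ckey2 were registered earlier in the test), could look like:

#!/usr/bin/env bash
# Sketch of one connect_authenticate round as seen in the trace above.
# Assumptions: SPDK target on 10.0.0.2:4420 reachable via the default RPC
# socket, a host-side SPDK app on /var/tmp/host.sock, and DH-HMAC-CHAP keys
# "key2"/"ckey2" already registered on both sides (done earlier in the test).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
subnqn=nqn.2024-03.io.spdk:cnode0

# Limit the host to one digest/dhgroup combination for this round.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# Allow the host on the subsystem with its DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the host side; authentication happens here.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the negotiated auth parameters on the target-side qpair.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]

# Tear the connection down before the next digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same skeleton repeats throughout the log with the digest, dhgroup and key index swapped per iteration of the test's nested loops.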
00:21:09.059 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.623 00:21:09.623 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.624 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.624 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.881 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.881 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.881 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.881 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.139 { 00:21:10.139 "cntlid": 101, 00:21:10.139 "qid": 0, 00:21:10.139 "state": "enabled", 00:21:10.139 "thread": "nvmf_tgt_poll_group_000", 00:21:10.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:10.139 "listen_address": { 00:21:10.139 "trtype": "TCP", 00:21:10.139 "adrfam": "IPv4", 00:21:10.139 "traddr": "10.0.0.2", 00:21:10.139 "trsvcid": "4420" 00:21:10.139 }, 00:21:10.139 "peer_address": { 00:21:10.139 "trtype": "TCP", 00:21:10.139 "adrfam": "IPv4", 00:21:10.139 "traddr": "10.0.0.1", 00:21:10.139 "trsvcid": "54528" 00:21:10.139 }, 00:21:10.139 "auth": { 00:21:10.139 "state": "completed", 00:21:10.139 "digest": "sha512", 00:21:10.139 "dhgroup": "null" 00:21:10.139 } 00:21:10.139 } 00:21:10.139 ]' 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.139 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.704 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:10.704 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.075 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.332 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.589 00:21:12.590 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.590 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.590 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.847 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.847 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.847 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.847 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.104 { 00:21:13.104 "cntlid": 103, 00:21:13.104 "qid": 0, 00:21:13.104 "state": "enabled", 00:21:13.104 "thread": "nvmf_tgt_poll_group_000", 00:21:13.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:13.104 "listen_address": { 00:21:13.104 "trtype": "TCP", 00:21:13.104 "adrfam": "IPv4", 00:21:13.104 "traddr": "10.0.0.2", 00:21:13.104 "trsvcid": "4420" 00:21:13.104 }, 00:21:13.104 "peer_address": { 00:21:13.104 "trtype": "TCP", 00:21:13.104 "adrfam": "IPv4", 00:21:13.104 "traddr": "10.0.0.1", 00:21:13.104 "trsvcid": "54550" 00:21:13.104 }, 00:21:13.104 "auth": { 00:21:13.104 "state": "completed", 00:21:13.104 "digest": "sha512", 00:21:13.104 "dhgroup": "null" 00:21:13.104 } 00:21:13.104 } 00:21:13.104 ]' 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.104 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.674 09:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:13.674 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:14.608 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.608 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.609 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
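The records above capture the core pairing in each authentication round: the target registers the host NQN against the subsystem with nvmf_subsystem_add_host, binding it to a DH-HMAC-CHAP key (and a controller key when bidirectional authentication is tested), and the host then attaches a bdev controller with the matching --dhchap-key/--dhchap-ctrlr-key. A minimal standalone sketch of that pairing, assuming the workspace paths and the 10.0.0.2/4420 listener from this run, the target RPC on its default socket, and keyring entries named key0/ckey0 already loaded on both sides:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# target side (default RPC socket assumed): allow this host and bind it to key0;
# the optional --dhchap-ctrlr-key enables bidirectional authentication
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side (host.sock, as used by hostrpc above): attach over TCP with the same key pair
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0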
00:21:14.866 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.867 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.124 00:21:15.382 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.382 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.382 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.639 { 00:21:15.639 "cntlid": 105, 00:21:15.639 "qid": 0, 00:21:15.639 "state": "enabled", 00:21:15.639 "thread": "nvmf_tgt_poll_group_000", 00:21:15.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:15.639 "listen_address": { 00:21:15.639 "trtype": "TCP", 00:21:15.639 "adrfam": "IPv4", 00:21:15.639 "traddr": "10.0.0.2", 00:21:15.639 "trsvcid": "4420" 00:21:15.639 }, 00:21:15.639 "peer_address": { 00:21:15.639 "trtype": "TCP", 00:21:15.639 "adrfam": "IPv4", 00:21:15.639 "traddr": "10.0.0.1", 00:21:15.639 "trsvcid": "54588" 00:21:15.639 }, 00:21:15.639 "auth": { 00:21:15.639 "state": "completed", 00:21:15.639 "digest": "sha512", 00:21:15.639 "dhgroup": "ffdhe2048" 00:21:15.639 } 00:21:15.639 } 00:21:15.639 ]' 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.639 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.639 09:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.205 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:16.205 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.577 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.577 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.510 00:21:18.510 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.510 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.510 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.768 { 00:21:18.768 "cntlid": 107, 00:21:18.768 "qid": 0, 00:21:18.768 "state": "enabled", 00:21:18.768 "thread": "nvmf_tgt_poll_group_000", 00:21:18.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:18.768 "listen_address": { 00:21:18.768 "trtype": "TCP", 00:21:18.768 "adrfam": "IPv4", 00:21:18.768 "traddr": "10.0.0.2", 00:21:18.768 "trsvcid": "4420" 00:21:18.768 }, 00:21:18.768 "peer_address": { 00:21:18.768 "trtype": "TCP", 00:21:18.768 "adrfam": "IPv4", 00:21:18.768 "traddr": "10.0.0.1", 00:21:18.768 "trsvcid": "57030" 00:21:18.768 }, 00:21:18.768 "auth": { 00:21:18.768 "state": "completed", 00:21:18.768 "digest": "sha512", 00:21:18.768 "dhgroup": "ffdhe2048" 00:21:18.768 } 00:21:18.768 } 00:21:18.768 ]' 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.768 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.335 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:19.335 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.267 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
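Each attach is followed by a verification pass: the host RPC bdev_nvme_get_controllers confirms the controller came up as nvme0, and the target RPC nvmf_subsystem_get_qpairs returns the queue pair whose auth block is inspected with jq, as in the .[0].auth.digest / .[0].auth.dhgroup / .[0].auth.state checks in these rounds. A condensed sketch of that check for the sha512/ffdhe2048 case, assuming jq is installed on the node and the target RPC uses its default socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# host controller must have come up under the expected name
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# target view of the connection must show a completed sha512/ffdhe2048 negotiation
qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]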
00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.525 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.091 00:21:21.091 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.091 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.091 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.656 { 00:21:21.656 "cntlid": 109, 00:21:21.656 "qid": 0, 00:21:21.656 "state": "enabled", 00:21:21.656 "thread": "nvmf_tgt_poll_group_000", 00:21:21.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:21.656 "listen_address": { 00:21:21.656 "trtype": "TCP", 00:21:21.656 "adrfam": "IPv4", 00:21:21.656 "traddr": "10.0.0.2", 00:21:21.656 "trsvcid": "4420" 00:21:21.656 }, 00:21:21.656 "peer_address": { 00:21:21.656 "trtype": "TCP", 00:21:21.656 "adrfam": "IPv4", 00:21:21.656 "traddr": "10.0.0.1", 00:21:21.656 "trsvcid": "57074" 00:21:21.656 }, 00:21:21.656 "auth": { 00:21:21.656 "state": "completed", 00:21:21.656 "digest": "sha512", 00:21:21.656 "dhgroup": "ffdhe2048" 00:21:21.656 } 00:21:21.656 } 00:21:21.656 ]' 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.656 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.913 09:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.913 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.913 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.913 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.913 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.170 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:22.170 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:23.543 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.544 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.801 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.801 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.059 00:21:24.317 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.317 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.317 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.575 { 00:21:24.575 "cntlid": 111, 00:21:24.575 "qid": 0, 00:21:24.575 "state": "enabled", 00:21:24.575 "thread": "nvmf_tgt_poll_group_000", 00:21:24.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:24.575 "listen_address": { 00:21:24.575 "trtype": "TCP", 00:21:24.575 "adrfam": "IPv4", 00:21:24.575 "traddr": "10.0.0.2", 00:21:24.575 "trsvcid": "4420" 00:21:24.575 }, 00:21:24.575 "peer_address": { 00:21:24.575 "trtype": "TCP", 00:21:24.575 "adrfam": "IPv4", 00:21:24.575 "traddr": "10.0.0.1", 00:21:24.575 "trsvcid": "57094" 00:21:24.575 }, 00:21:24.575 "auth": { 00:21:24.575 "state": "completed", 00:21:24.575 "digest": "sha512", 00:21:24.575 "dhgroup": "ffdhe2048" 00:21:24.575 } 00:21:24.575 } 00:21:24.575 ]' 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.575 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.575 
09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.832 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.832 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.832 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.832 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.832 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.395 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:25.395 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.768 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.769 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.769 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.702 00:21:27.702 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.702 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.702 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.960 { 00:21:27.960 "cntlid": 113, 00:21:27.960 "qid": 0, 00:21:27.960 "state": "enabled", 00:21:27.960 "thread": "nvmf_tgt_poll_group_000", 00:21:27.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:27.960 "listen_address": { 00:21:27.960 "trtype": "TCP", 00:21:27.960 "adrfam": "IPv4", 00:21:27.960 "traddr": "10.0.0.2", 00:21:27.960 "trsvcid": "4420" 00:21:27.960 }, 00:21:27.960 "peer_address": { 00:21:27.960 "trtype": "TCP", 00:21:27.960 "adrfam": "IPv4", 00:21:27.960 "traddr": "10.0.0.1", 00:21:27.960 "trsvcid": "45522" 00:21:27.960 }, 00:21:27.960 "auth": { 00:21:27.960 "state": "completed", 00:21:27.960 "digest": "sha512", 00:21:27.960 "dhgroup": "ffdhe3072" 00:21:27.960 } 00:21:27.960 } 00:21:27.960 ]' 00:21:27.960 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.960 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.217 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.217 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.218 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.218 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.218 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.511 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:28.511 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.913 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.846 00:21:30.846 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.846 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.846 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.411 { 00:21:31.411 "cntlid": 115, 00:21:31.411 "qid": 0, 00:21:31.411 "state": "enabled", 00:21:31.411 "thread": "nvmf_tgt_poll_group_000", 00:21:31.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:31.411 "listen_address": { 00:21:31.411 "trtype": "TCP", 00:21:31.411 "adrfam": "IPv4", 00:21:31.411 "traddr": "10.0.0.2", 00:21:31.411 "trsvcid": "4420" 00:21:31.411 }, 00:21:31.411 "peer_address": { 00:21:31.411 "trtype": "TCP", 00:21:31.411 "adrfam": "IPv4", 
00:21:31.411 "traddr": "10.0.0.1", 00:21:31.411 "trsvcid": "45548" 00:21:31.411 }, 00:21:31.411 "auth": { 00:21:31.411 "state": "completed", 00:21:31.411 "digest": "sha512", 00:21:31.411 "dhgroup": "ffdhe3072" 00:21:31.411 } 00:21:31.411 } 00:21:31.411 ]' 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.411 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.975 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:31.975 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.345 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
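Beyond the bdev-level attach, each round also drives the kernel initiator: target/auth.sh@36 passes the DHHC-1 secret (and, for bidirectional rounds, the controller secret) straight to nvme connect, and target/auth.sh@82 tears the controller down again with nvme disconnect before the host is removed from the subsystem. A sketch of that step with the literal keys from this log replaced by placeholder variables:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# DHCHAP_SECRET / DHCHAP_CTRL_SECRET stand in for the DHHC-1:xx:...: strings shown above
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
  --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n $SUBNQN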
00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.603 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.166 00:21:34.166 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.166 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.166 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.730 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.730 { 00:21:34.731 "cntlid": 117, 00:21:34.731 "qid": 0, 00:21:34.731 "state": "enabled", 00:21:34.731 "thread": "nvmf_tgt_poll_group_000", 00:21:34.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:34.731 "listen_address": { 00:21:34.731 "trtype": "TCP", 
00:21:34.731 "adrfam": "IPv4", 00:21:34.731 "traddr": "10.0.0.2", 00:21:34.731 "trsvcid": "4420" 00:21:34.731 }, 00:21:34.731 "peer_address": { 00:21:34.731 "trtype": "TCP", 00:21:34.731 "adrfam": "IPv4", 00:21:34.731 "traddr": "10.0.0.1", 00:21:34.731 "trsvcid": "45568" 00:21:34.731 }, 00:21:34.731 "auth": { 00:21:34.731 "state": "completed", 00:21:34.731 "digest": "sha512", 00:21:34.731 "dhgroup": "ffdhe3072" 00:21:34.731 } 00:21:34.731 } 00:21:34.731 ]' 00:21:34.731 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.731 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.731 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.988 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.988 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.988 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.988 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.989 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.247 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:35.247 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.620 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.184 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.747 00:21:37.747 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.747 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.747 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.311 { 00:21:38.311 "cntlid": 119, 00:21:38.311 "qid": 0, 00:21:38.311 "state": "enabled", 00:21:38.311 "thread": "nvmf_tgt_poll_group_000", 00:21:38.311 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:38.311 "listen_address": { 00:21:38.311 "trtype": "TCP", 00:21:38.311 "adrfam": "IPv4", 00:21:38.311 "traddr": "10.0.0.2", 00:21:38.311 "trsvcid": "4420" 00:21:38.311 }, 00:21:38.311 "peer_address": { 00:21:38.311 "trtype": "TCP", 00:21:38.311 "adrfam": "IPv4", 00:21:38.311 "traddr": "10.0.0.1", 00:21:38.311 "trsvcid": "43766" 00:21:38.311 }, 00:21:38.311 "auth": { 00:21:38.311 "state": "completed", 00:21:38.311 "digest": "sha512", 00:21:38.311 "dhgroup": "ffdhe3072" 00:21:38.311 } 00:21:38.311 } 00:21:38.311 ]' 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.311 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.568 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.568 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.568 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.568 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.568 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.824 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:38.824 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.195 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.195 09:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.452 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.016 00:21:41.016 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.016 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.016 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.273 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.273 { 00:21:41.273 "cntlid": 121, 00:21:41.273 "qid": 0, 00:21:41.273 "state": "enabled", 00:21:41.273 "thread": "nvmf_tgt_poll_group_000", 00:21:41.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:41.273 "listen_address": { 00:21:41.273 "trtype": "TCP", 00:21:41.273 "adrfam": "IPv4", 00:21:41.273 "traddr": "10.0.0.2", 00:21:41.273 "trsvcid": "4420" 00:21:41.273 }, 00:21:41.273 "peer_address": { 00:21:41.273 "trtype": "TCP", 00:21:41.273 "adrfam": "IPv4", 00:21:41.273 "traddr": "10.0.0.1", 00:21:41.273 "trsvcid": "43796" 00:21:41.273 }, 00:21:41.273 "auth": { 00:21:41.273 "state": "completed", 00:21:41.273 "digest": "sha512", 00:21:41.273 "dhgroup": "ffdhe4096" 00:21:41.273 } 00:21:41.273 } 00:21:41.273 ]' 00:21:41.273 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.532 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.097 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:42.097 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
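The trace repeats one cycle per key index: the host-side RPC socket (/var/tmp/host.sock) is told which DH-HMAC-CHAP digest and DH group to offer, the target subsystem is told which key(s) the host NQN may authenticate with, a controller is attached and its qpair inspected, and then everything is torn down before the next key. The following is a minimal sketch of that cycle, assembled only from commands visible in the log; it is a simplified illustration, not the test script itself (rpc.py paths are abbreviated relative to the SPDK tree, the -i/--hostid options from the trace are omitted, and $key0/$ckey0 stand in for the DHHC-1 secret strings):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02   # host NQN used throughout the trace
  subnqn=nqn.2024-03.io.spdk:cnode0                                              # target subsystem NQN

  # host-side bdev layer: offer exactly one digest / DH-group combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # target side: allow this host to authenticate with key0 (ckey0 enables bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller with the matching keys, then inspect what was negotiated
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")
  jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state' <<< "$qpairs"     # expect: sha512 / ffdhe4096 / completed
  # tear down the bdev path, then repeat the handshake through nvme-cli with the raw DHHC-1 secrets
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -q "$hostnqn" -l 0 --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n "$subnqn"
  scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Every subsequent block in the log is this same sequence with the next key index, and then the next ffdhe group (4096, 6144, 8192), substituted in.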
00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.470 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.042 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:44.042 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.042 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.042 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.043 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.301 00:21:44.301 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.301 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.301 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.867 { 00:21:44.867 "cntlid": 123, 00:21:44.867 "qid": 0, 00:21:44.867 "state": "enabled", 00:21:44.867 "thread": "nvmf_tgt_poll_group_000", 00:21:44.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:44.867 "listen_address": { 00:21:44.867 "trtype": "TCP", 00:21:44.867 "adrfam": "IPv4", 00:21:44.867 "traddr": "10.0.0.2", 00:21:44.867 "trsvcid": "4420" 00:21:44.867 }, 00:21:44.867 "peer_address": { 00:21:44.867 "trtype": "TCP", 00:21:44.867 "adrfam": "IPv4", 00:21:44.867 "traddr": "10.0.0.1", 00:21:44.867 "trsvcid": "43828" 00:21:44.867 }, 00:21:44.867 "auth": { 00:21:44.867 "state": "completed", 00:21:44.867 "digest": "sha512", 00:21:44.867 "dhgroup": "ffdhe4096" 00:21:44.867 } 00:21:44.867 } 00:21:44.867 ]' 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.867 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.433 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:45.433 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:46.372 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.372 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.372 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.372 09:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.631 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.631 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.631 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.631 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.889 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.455 00:21:47.455 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.455 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.455 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.713 09:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.713 { 00:21:47.713 "cntlid": 125, 00:21:47.713 "qid": 0, 00:21:47.713 "state": "enabled", 00:21:47.713 "thread": "nvmf_tgt_poll_group_000", 00:21:47.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:47.713 "listen_address": { 00:21:47.713 "trtype": "TCP", 00:21:47.713 "adrfam": "IPv4", 00:21:47.713 "traddr": "10.0.0.2", 00:21:47.713 "trsvcid": "4420" 00:21:47.713 }, 00:21:47.713 "peer_address": { 00:21:47.713 "trtype": "TCP", 00:21:47.713 "adrfam": "IPv4", 00:21:47.713 "traddr": "10.0.0.1", 00:21:47.713 "trsvcid": "36292" 00:21:47.713 }, 00:21:47.713 "auth": { 00:21:47.713 "state": "completed", 00:21:47.713 "digest": "sha512", 00:21:47.713 "dhgroup": "ffdhe4096" 00:21:47.713 } 00:21:47.713 } 00:21:47.713 ]' 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.713 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.970 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.970 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.970 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.228 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:48.228 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.598 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.421 00:21:50.421 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.421 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.421 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.678 09:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.678 { 00:21:50.678 "cntlid": 127, 00:21:50.678 "qid": 0, 00:21:50.678 "state": "enabled", 00:21:50.678 "thread": "nvmf_tgt_poll_group_000", 00:21:50.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:50.678 "listen_address": { 00:21:50.678 "trtype": "TCP", 00:21:50.678 "adrfam": "IPv4", 00:21:50.678 "traddr": "10.0.0.2", 00:21:50.678 "trsvcid": "4420" 00:21:50.678 }, 00:21:50.678 "peer_address": { 00:21:50.678 "trtype": "TCP", 00:21:50.678 "adrfam": "IPv4", 00:21:50.678 "traddr": "10.0.0.1", 00:21:50.678 "trsvcid": "36312" 00:21:50.678 }, 00:21:50.678 "auth": { 00:21:50.678 "state": "completed", 00:21:50.678 "digest": "sha512", 00:21:50.678 "dhgroup": "ffdhe4096" 00:21:50.678 } 00:21:50.678 } 00:21:50.678 ]' 00:21:50.678 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.935 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.936 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.501 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:51.501 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.873 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.874 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.874 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.132 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.132 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.132 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.700 00:21:53.700 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.700 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.700 
09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.961 { 00:21:53.961 "cntlid": 129, 00:21:53.961 "qid": 0, 00:21:53.961 "state": "enabled", 00:21:53.961 "thread": "nvmf_tgt_poll_group_000", 00:21:53.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:53.961 "listen_address": { 00:21:53.961 "trtype": "TCP", 00:21:53.961 "adrfam": "IPv4", 00:21:53.961 "traddr": "10.0.0.2", 00:21:53.961 "trsvcid": "4420" 00:21:53.961 }, 00:21:53.961 "peer_address": { 00:21:53.961 "trtype": "TCP", 00:21:53.961 "adrfam": "IPv4", 00:21:53.961 "traddr": "10.0.0.1", 00:21:53.961 "trsvcid": "36360" 00:21:53.961 }, 00:21:53.961 "auth": { 00:21:53.961 "state": "completed", 00:21:53.961 "digest": "sha512", 00:21:53.961 "dhgroup": "ffdhe6144" 00:21:53.961 } 00:21:53.961 } 00:21:53.961 ]' 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.961 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.218 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.218 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.218 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.474 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:54.475 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret 
DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.847 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.105 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.672 00:21:56.930 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.930 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.930 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.496 { 00:21:57.496 "cntlid": 131, 00:21:57.496 "qid": 0, 00:21:57.496 "state": "enabled", 00:21:57.496 "thread": "nvmf_tgt_poll_group_000", 00:21:57.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:57.496 "listen_address": { 00:21:57.496 "trtype": "TCP", 00:21:57.496 "adrfam": "IPv4", 00:21:57.496 "traddr": "10.0.0.2", 00:21:57.496 "trsvcid": "4420" 00:21:57.496 }, 00:21:57.496 "peer_address": { 00:21:57.496 "trtype": "TCP", 00:21:57.496 "adrfam": "IPv4", 00:21:57.496 "traddr": "10.0.0.1", 00:21:57.496 "trsvcid": "36390" 00:21:57.496 }, 00:21:57.496 "auth": { 00:21:57.496 "state": "completed", 00:21:57.496 "digest": "sha512", 00:21:57.496 "dhgroup": "ffdhe6144" 00:21:57.496 } 00:21:57.496 } 00:21:57.496 ]' 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.496 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.429 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:58.429 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:21:59.362 09:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.362 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.619 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.620 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.550 00:22:00.550 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.550 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.550 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.840 { 00:22:00.840 "cntlid": 133, 00:22:00.840 "qid": 0, 00:22:00.840 "state": "enabled", 00:22:00.840 "thread": "nvmf_tgt_poll_group_000", 00:22:00.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:00.840 "listen_address": { 00:22:00.840 "trtype": "TCP", 00:22:00.840 "adrfam": "IPv4", 00:22:00.840 "traddr": "10.0.0.2", 00:22:00.840 "trsvcid": "4420" 00:22:00.840 }, 00:22:00.840 "peer_address": { 00:22:00.840 "trtype": "TCP", 00:22:00.840 "adrfam": "IPv4", 00:22:00.840 "traddr": "10.0.0.1", 00:22:00.840 "trsvcid": "54320" 00:22:00.840 }, 00:22:00.840 "auth": { 00:22:00.840 "state": "completed", 00:22:00.840 "digest": "sha512", 00:22:00.840 "dhgroup": "ffdhe6144" 00:22:00.840 } 00:22:00.840 } 00:22:00.840 ]' 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.840 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.430 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:22:01.430 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.802 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:03.059 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.992 00:22:03.992 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.992 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.992 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.557 { 00:22:04.557 "cntlid": 135, 00:22:04.557 "qid": 0, 00:22:04.557 "state": "enabled", 00:22:04.557 "thread": "nvmf_tgt_poll_group_000", 00:22:04.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:04.557 "listen_address": { 00:22:04.557 "trtype": "TCP", 00:22:04.557 "adrfam": "IPv4", 00:22:04.557 "traddr": "10.0.0.2", 00:22:04.557 "trsvcid": "4420" 00:22:04.557 }, 00:22:04.557 "peer_address": { 00:22:04.557 "trtype": "TCP", 00:22:04.557 "adrfam": "IPv4", 00:22:04.557 "traddr": "10.0.0.1", 00:22:04.557 "trsvcid": "54332" 00:22:04.557 }, 00:22:04.557 "auth": { 00:22:04.557 "state": "completed", 00:22:04.557 "digest": "sha512", 00:22:04.557 "dhgroup": "ffdhe6144" 00:22:04.557 } 00:22:04.557 } 00:22:04.557 ]' 00:22:04.557 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.814 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.380 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:05.380 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.752 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.011 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.383 00:22:08.383 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.383 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.383 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.641 { 00:22:08.641 "cntlid": 137, 00:22:08.641 "qid": 0, 00:22:08.641 "state": "enabled", 00:22:08.641 "thread": "nvmf_tgt_poll_group_000", 00:22:08.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:08.641 "listen_address": { 00:22:08.641 "trtype": "TCP", 00:22:08.641 "adrfam": "IPv4", 00:22:08.641 "traddr": "10.0.0.2", 00:22:08.641 "trsvcid": "4420" 00:22:08.641 }, 00:22:08.641 "peer_address": { 00:22:08.641 "trtype": "TCP", 00:22:08.641 "adrfam": "IPv4", 00:22:08.641 "traddr": "10.0.0.1", 00:22:08.641 "trsvcid": "58642" 00:22:08.641 }, 00:22:08.641 "auth": { 00:22:08.641 "state": "completed", 00:22:08.641 "digest": "sha512", 00:22:08.641 "dhgroup": "ffdhe8192" 00:22:08.641 } 00:22:08.641 } 00:22:08.641 ]' 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.641 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.900 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.900 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.900 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.158 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:22:09.158 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:22:10.529 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.529 09:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.529 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.463 00:22:11.720 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.721 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.721 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.285 { 00:22:12.285 "cntlid": 139, 00:22:12.285 "qid": 0, 00:22:12.285 "state": "enabled", 00:22:12.285 "thread": "nvmf_tgt_poll_group_000", 00:22:12.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:12.285 "listen_address": { 00:22:12.285 "trtype": "TCP", 00:22:12.285 "adrfam": "IPv4", 00:22:12.285 "traddr": "10.0.0.2", 00:22:12.285 "trsvcid": "4420" 00:22:12.285 }, 00:22:12.285 "peer_address": { 00:22:12.285 "trtype": "TCP", 00:22:12.285 "adrfam": "IPv4", 00:22:12.285 "traddr": "10.0.0.1", 00:22:12.285 "trsvcid": "58666" 00:22:12.285 }, 00:22:12.285 "auth": { 00:22:12.285 "state": "completed", 00:22:12.285 "digest": "sha512", 00:22:12.285 "dhgroup": "ffdhe8192" 00:22:12.285 } 00:22:12.285 } 00:22:12.285 ]' 00:22:12.285 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.285 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.285 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.285 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.285 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.543 09:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.543 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.543 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.801 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:22:12.801 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: --dhchap-ctrl-secret DHHC-1:02:ZDJkMjg4NWExZGQ5YjQ0NjVkYjlkZjVkZDkwNzM3MjkzNTA4NzYxY2QyMjRlOTk5Y2Tbdw==: 00:22:13.733 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.991 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.248 09:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.248 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.621 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.621 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.879 { 00:22:15.879 "cntlid": 141, 00:22:15.879 "qid": 0, 00:22:15.879 "state": "enabled", 00:22:15.879 "thread": "nvmf_tgt_poll_group_000", 00:22:15.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:15.879 "listen_address": { 00:22:15.879 "trtype": "TCP", 00:22:15.879 "adrfam": "IPv4", 00:22:15.879 "traddr": "10.0.0.2", 00:22:15.879 "trsvcid": "4420" 00:22:15.879 }, 00:22:15.879 "peer_address": { 00:22:15.879 "trtype": "TCP", 00:22:15.879 "adrfam": "IPv4", 00:22:15.879 "traddr": "10.0.0.1", 00:22:15.879 "trsvcid": "58708" 00:22:15.879 }, 00:22:15.879 "auth": { 00:22:15.879 "state": "completed", 00:22:15.879 "digest": "sha512", 00:22:15.879 "dhgroup": "ffdhe8192" 00:22:15.879 } 00:22:15.879 } 00:22:15.879 ]' 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.879 09:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.879 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.443 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:22:16.443 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:01:ZjgwYmI3NDE3Y2I5NjE3MGJlY2MwZGE1OTI2ODIyMDKyTLMO: 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.813 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.070 09:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.070 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.001 00:22:19.001 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.001 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.001 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.565 { 00:22:19.565 "cntlid": 143, 00:22:19.565 "qid": 0, 00:22:19.565 "state": "enabled", 00:22:19.565 "thread": "nvmf_tgt_poll_group_000", 00:22:19.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:19.565 "listen_address": { 00:22:19.565 "trtype": "TCP", 00:22:19.565 "adrfam": "IPv4", 00:22:19.565 "traddr": "10.0.0.2", 00:22:19.565 "trsvcid": "4420" 00:22:19.565 }, 00:22:19.565 "peer_address": { 00:22:19.565 "trtype": "TCP", 00:22:19.565 "adrfam": "IPv4", 00:22:19.565 "traddr": "10.0.0.1", 00:22:19.565 "trsvcid": "37888" 00:22:19.565 }, 00:22:19.565 "auth": { 00:22:19.565 "state": "completed", 00:22:19.565 "digest": "sha512", 00:22:19.565 "dhgroup": "ffdhe8192" 00:22:19.565 } 00:22:19.565 } 00:22:19.565 ]' 00:22:19.565 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.823 
09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.823 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.388 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:20.388 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:21.320 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.320 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:21.320 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.320 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.577 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.142 09:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.142 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.513 00:22:23.513 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.513 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.513 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.770 { 00:22:23.770 "cntlid": 145, 00:22:23.770 "qid": 0, 00:22:23.770 "state": "enabled", 00:22:23.770 "thread": "nvmf_tgt_poll_group_000", 00:22:23.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:23.770 "listen_address": { 00:22:23.770 "trtype": "TCP", 00:22:23.770 "adrfam": "IPv4", 00:22:23.770 "traddr": "10.0.0.2", 00:22:23.770 "trsvcid": "4420" 00:22:23.770 }, 00:22:23.770 "peer_address": { 00:22:23.770 
"trtype": "TCP", 00:22:23.770 "adrfam": "IPv4", 00:22:23.770 "traddr": "10.0.0.1", 00:22:23.770 "trsvcid": "37922" 00:22:23.770 }, 00:22:23.770 "auth": { 00:22:23.770 "state": "completed", 00:22:23.770 "digest": "sha512", 00:22:23.770 "dhgroup": "ffdhe8192" 00:22:23.770 } 00:22:23.770 } 00:22:23.770 ]' 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.770 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.334 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:22:24.334 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:YmVlOTA5MTgwNjk2ZTYyNWQwY2MzMjlhYmVjOTE0MzgyMDg2N2NjZjdjMDYyMmM3ZKf9Ng==: --dhchap-ctrl-secret DHHC-1:03:YmUyNjc3Zjc4NDQyZDk3MzUyNGMzYzdlZThlOGFlYWZhZjk3NmY3MjQyNmNiZGY2OTExODZlZGRkOTdlMzRkMxn7Tck=: 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:25.267 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:26.201 request: 00:22:26.201 { 00:22:26.201 "name": "nvme0", 00:22:26.201 "trtype": "tcp", 00:22:26.201 "traddr": "10.0.0.2", 00:22:26.201 "adrfam": "ipv4", 00:22:26.201 "trsvcid": "4420", 00:22:26.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:26.201 "prchk_reftag": false, 00:22:26.201 "prchk_guard": false, 00:22:26.201 "hdgst": false, 00:22:26.201 "ddgst": false, 00:22:26.201 "dhchap_key": "key2", 00:22:26.201 "allow_unrecognized_csi": false, 00:22:26.201 "method": "bdev_nvme_attach_controller", 00:22:26.201 "req_id": 1 00:22:26.201 } 00:22:26.201 Got JSON-RPC error response 00:22:26.201 response: 00:22:26.201 { 00:22:26.201 "code": -5, 00:22:26.201 "message": "Input/output error" 00:22:26.201 } 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.460 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:26.460 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:27.395 request: 00:22:27.395 { 00:22:27.395 "name": "nvme0", 00:22:27.395 "trtype": "tcp", 00:22:27.395 "traddr": "10.0.0.2", 00:22:27.395 "adrfam": "ipv4", 00:22:27.395 "trsvcid": "4420", 00:22:27.395 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:27.395 "prchk_reftag": false, 00:22:27.395 "prchk_guard": false, 00:22:27.395 "hdgst": false, 00:22:27.395 "ddgst": false, 00:22:27.395 "dhchap_key": "key1", 00:22:27.395 "dhchap_ctrlr_key": "ckey2", 00:22:27.395 "allow_unrecognized_csi": false, 00:22:27.395 "method": "bdev_nvme_attach_controller", 00:22:27.395 "req_id": 1 00:22:27.395 } 00:22:27.395 Got JSON-RPC error response 00:22:27.395 response: 00:22:27.395 { 00:22:27.395 "code": -5, 00:22:27.395 "message": "Input/output error" 00:22:27.395 } 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.654 09:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.654 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.587 request: 00:22:28.587 { 00:22:28.587 "name": "nvme0", 00:22:28.587 "trtype": "tcp", 00:22:28.587 "traddr": "10.0.0.2", 00:22:28.587 "adrfam": "ipv4", 00:22:28.587 "trsvcid": "4420", 00:22:28.587 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:28.587 "prchk_reftag": false, 00:22:28.587 "prchk_guard": false, 00:22:28.587 "hdgst": false, 00:22:28.587 "ddgst": false, 00:22:28.587 "dhchap_key": "key1", 00:22:28.587 "dhchap_ctrlr_key": "ckey1", 00:22:28.587 "allow_unrecognized_csi": false, 00:22:28.587 "method": "bdev_nvme_attach_controller", 00:22:28.588 "req_id": 1 00:22:28.588 } 00:22:28.588 Got JSON-RPC error response 00:22:28.588 response: 00:22:28.588 { 00:22:28.588 "code": -5, 00:22:28.588 "message": "Input/output error" 00:22:28.588 } 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1527489 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1527489 ']' 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1527489 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527489 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527489' 00:22:28.588 killing process with pid 1527489 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1527489 00:22:28.588 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1527489 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1557013 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1557013 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1557013 ']' 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.845 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1557013 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1557013 ']' 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
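For orientation, the per-key check that target/auth.sh repeats in this trace condenses to the sketch below. It is assembled only from commands visible in the trace; HOSTNQN, the key file paths and the DHHC-1 secret stand in for the run's generated values, target-side calls go through the suite's rpc_cmd wrapper (rpc.py against the target's socket inside its netns), and host-side calls use rpc.py -s /var/tmp/host.sock. This is a condensed reading of the trace, not an additional test step.

  # Target side: register key material in the keyring and allow the host with it
  rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nmi
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: restrict the digest/dhgroup under test, then attach with the matching keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the authenticated queue pair, then tear down
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expects auth.state == "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

  # Kernel-initiator variant of the same check, passing the DHHC-1 secret directly
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -q "$HOSTNQN" -l 0 --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

A mismatched or absent controller key on either side is expected to fail the attach with JSON-RPC error -5 (Input/output error), which is what the NOT bdev_connect cases earlier in the trace assert.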
00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.410 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.668 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.668 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:29.668 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:29.668 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.668 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.927 null0 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nmi 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.rju ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rju 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NMT 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ZLJ ]] 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZLJ 00:22:29.927 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:29.928 09:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cub 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1Yz ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Yz 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pEI 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:22:29.928 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.826 nvme0n1 00:22:31.826 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.826 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.826 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.084 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.084 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.084 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.084 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.341 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.341 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.341 { 00:22:32.341 "cntlid": 1, 00:22:32.341 "qid": 0, 00:22:32.341 "state": "enabled", 00:22:32.341 "thread": "nvmf_tgt_poll_group_000", 00:22:32.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:32.341 "listen_address": { 00:22:32.341 "trtype": "TCP", 00:22:32.341 "adrfam": "IPv4", 00:22:32.341 "traddr": "10.0.0.2", 00:22:32.341 "trsvcid": "4420" 00:22:32.341 }, 00:22:32.341 "peer_address": { 00:22:32.341 "trtype": "TCP", 00:22:32.341 "adrfam": "IPv4", 00:22:32.341 "traddr": "10.0.0.1", 00:22:32.341 "trsvcid": "48232" 00:22:32.341 }, 00:22:32.341 "auth": { 00:22:32.341 "state": "completed", 00:22:32.341 "digest": "sha512", 00:22:32.341 "dhgroup": "ffdhe8192" 00:22:32.341 } 00:22:32.341 } 00:22:32.341 ]' 00:22:32.341 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.341 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.341 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.341 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.341 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.341 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.341 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.341 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:32.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:34.011 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.577 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.835 request: 00:22:34.835 { 00:22:34.835 "name": "nvme0", 00:22:34.835 "trtype": "tcp", 00:22:34.835 "traddr": "10.0.0.2", 00:22:34.835 "adrfam": "ipv4", 00:22:34.835 "trsvcid": "4420", 00:22:34.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:34.835 "prchk_reftag": false, 00:22:34.835 "prchk_guard": false, 00:22:34.835 "hdgst": false, 00:22:34.835 "ddgst": false, 00:22:34.835 "dhchap_key": "key3", 00:22:34.835 "allow_unrecognized_csi": false, 00:22:34.835 "method": "bdev_nvme_attach_controller", 00:22:34.835 "req_id": 1 00:22:34.835 } 00:22:34.835 Got JSON-RPC error response 00:22:34.835 response: 00:22:34.835 { 00:22:34.835 "code": -5, 00:22:34.835 "message": "Input/output error" 00:22:34.835 } 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:34.835 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.404 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.974 request: 00:22:35.974 { 00:22:35.974 "name": "nvme0", 00:22:35.974 "trtype": "tcp", 00:22:35.974 "traddr": "10.0.0.2", 00:22:35.974 "adrfam": "ipv4", 00:22:35.974 "trsvcid": "4420", 00:22:35.974 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:35.974 "prchk_reftag": false, 00:22:35.974 "prchk_guard": false, 00:22:35.974 "hdgst": false, 00:22:35.974 "ddgst": false, 00:22:35.974 "dhchap_key": "key3", 00:22:35.974 "allow_unrecognized_csi": false, 00:22:35.974 "method": "bdev_nvme_attach_controller", 00:22:35.974 "req_id": 1 00:22:35.974 } 00:22:35.974 Got JSON-RPC error response 00:22:35.974 response: 00:22:35.974 { 00:22:35.974 "code": -5, 00:22:35.974 "message": "Input/output error" 00:22:35.974 } 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.974 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.543 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:37.110 request: 00:22:37.110 { 00:22:37.110 "name": "nvme0", 00:22:37.110 "trtype": "tcp", 00:22:37.110 "traddr": "10.0.0.2", 00:22:37.110 "adrfam": "ipv4", 00:22:37.110 "trsvcid": "4420", 00:22:37.110 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:37.110 "prchk_reftag": false, 00:22:37.110 "prchk_guard": false, 00:22:37.110 "hdgst": false, 00:22:37.110 "ddgst": false, 00:22:37.110 "dhchap_key": "key0", 00:22:37.110 "dhchap_ctrlr_key": "key1", 00:22:37.110 "allow_unrecognized_csi": false, 00:22:37.110 "method": "bdev_nvme_attach_controller", 00:22:37.110 "req_id": 1 00:22:37.110 } 00:22:37.110 Got JSON-RPC error response 00:22:37.110 response: 00:22:37.110 { 00:22:37.110 "code": -5, 00:22:37.110 "message": "Input/output error" 00:22:37.110 } 00:22:37.110 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:37.110 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:37.110 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:37.110 09:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:37.110 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:37.110 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:37.111 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:37.677 nvme0n1 00:22:37.677 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:37.677 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:37.677 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.935 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.935 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.935 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:38.500 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:39.875 nvme0n1 00:22:40.132 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:40.132 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:40.132 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:40.391 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.649 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.649 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:40.649 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: --dhchap-ctrl-secret DHHC-1:03:ODMwODgwMmQ1NDIxMTYxYzQ0YzI2ZTljMjhhZTZmMjE5MDYxNjBkMDA1OWExMDJkZGI4M2I3ODdmNmZkMGRlM+4HeQk=: 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:42.023 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:42.957 request: 00:22:42.957 { 00:22:42.957 "name": "nvme0", 00:22:42.957 "trtype": "tcp", 00:22:42.957 "traddr": "10.0.0.2", 00:22:42.957 "adrfam": "ipv4", 00:22:42.957 "trsvcid": "4420", 00:22:42.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:42.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:42.957 "prchk_reftag": false, 00:22:42.957 "prchk_guard": false, 00:22:42.957 "hdgst": false, 00:22:42.957 "ddgst": false, 00:22:42.957 "dhchap_key": "key1", 00:22:42.957 "allow_unrecognized_csi": false, 00:22:42.957 "method": "bdev_nvme_attach_controller", 00:22:42.957 "req_id": 1 00:22:42.957 } 00:22:42.957 Got JSON-RPC error response 00:22:42.957 response: 00:22:42.957 { 00:22:42.957 "code": -5, 00:22:42.957 "message": "Input/output error" 00:22:42.957 } 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:42.957 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:44.330 nvme0n1 00:22:44.331 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:44.331 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:44.331 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.896 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.896 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.896 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:45.463 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:45.721 nvme0n1 00:22:45.721 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:45.721 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:45.721 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.288 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.288 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.288 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: '' 2s 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: ]] 00:22:46.288 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTQwZGUyYmJmZGY1NjcwODQ1NGViMWU4ZjVjMzJlYWUoX1pC: 00:22:46.546 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:46.546 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:46.546 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: 2s 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: ]] 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MDE1NGUxYWQ3OWI1NGUxNjFiNGY1ZWI2OWMwMWRkZWFiNDczODk5ZmQ3OWJkMzFl0FNdaQ==: 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:48.446 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:50.346 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:50.604 09:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:51.976 nvme0n1 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.234 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:53.606 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:53.606 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:53.606 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:53.864 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:54.429 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:54.430 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:54.430 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:54.995 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:55.929 request: 00:22:55.929 { 00:22:55.929 "name": "nvme0", 00:22:55.929 "dhchap_key": "key1", 00:22:55.929 "dhchap_ctrlr_key": "key3", 00:22:55.929 "method": "bdev_nvme_set_keys", 00:22:55.929 "req_id": 1 00:22:55.929 } 00:22:55.929 Got JSON-RPC error response 00:22:55.929 response: 00:22:55.929 { 00:22:55.929 "code": -13, 00:22:55.929 "message": "Permission denied" 00:22:55.929 } 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.929 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:56.187 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:56.187 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:57.563 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:57.563 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:57.563 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:57.822 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.720 nvme0n1 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
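What this part of the trace exercises is live re-keying: nvmf_subsystem_set_keys changes which keys the target will accept for this host, bdev_nvme_set_keys re-authenticates the already-attached controller, and a deliberately mismatched pair is expected to fail with JSON-RPC -13 (Permission denied), which is what the NOT wrapper asserts. A hand-runnable sketch of the matching-pair case, reusing the names from the trace (the target-side rpc_cmd socket is not shown in the log, so the default RPC socket is assumed here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # Target side: from now on only accept key2 (host key) / key3 (controller key) for this host.
    "$RPC" nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host side: re-authenticate the existing controller with the matching pair.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # A pair the target was not told about (e.g. key2/key0) is rejected with -13, as in the trace.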
00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.720 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.652 request: 00:23:00.652 { 00:23:00.652 "name": "nvme0", 00:23:00.652 "dhchap_key": "key2", 00:23:00.652 "dhchap_ctrlr_key": "key0", 00:23:00.652 "method": "bdev_nvme_set_keys", 00:23:00.652 "req_id": 1 00:23:00.652 } 00:23:00.652 Got JSON-RPC error response 00:23:00.652 response: 00:23:00.652 { 00:23:00.652 "code": -13, 00:23:00.652 "message": "Permission denied" 00:23:00.652 } 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:00.652 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.217 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:01.217 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:02.245 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:02.245 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:02.245 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1527524 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1527524 ']' 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1527524 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:02.503 
09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527524 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527524' 00:23:02.503 killing process with pid 1527524 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1527524 00:23:02.503 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1527524 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.069 rmmod nvme_tcp 00:23:03.069 rmmod nvme_fabrics 00:23:03.069 rmmod nvme_keyring 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1557013 ']' 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1557013 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1557013 ']' 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1557013 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557013 00:23:03.069 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:03.070 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:03.070 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557013' 00:23:03.070 killing process with pid 1557013 00:23:03.070 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1557013 00:23:03.070 09:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1557013 00:23:03.328 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:03.328 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:03.328 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:03.328 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:03.328 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.329 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.864 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nmi /tmp/spdk.key-sha256.NMT /tmp/spdk.key-sha384.cub /tmp/spdk.key-sha512.pEI /tmp/spdk.key-sha512.rju /tmp/spdk.key-sha384.ZLJ /tmp/spdk.key-sha256.1Yz '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:05.865 00:23:05.865 real 4m45.893s 00:23:05.865 user 11m30.228s 00:23:05.865 sys 0m35.445s 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.865 ************************************ 00:23:05.865 END TEST nvmf_auth_target 00:23:05.865 ************************************ 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:05.865 ************************************ 00:23:05.865 START TEST nvmf_bdevio_no_huge 00:23:05.865 ************************************ 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:05.865 * Looking for test storage... 
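Before bdevio.sh takes over, the auth suite's teardown traced just above (nvmftestfini plus the suite's own cleanup) reduces to a short sequence; a sketch assuming the same test interface (cvl_0_1) and generated key paths as this particular run:

    # Unload the kernel NVMe-oF initiator modules used by the nvme connect steps.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Restore iptables without the SPDK_NVMF rules and flush the test interface.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1

    # Remove the DH-HMAC-CHAP key files generated for this run.
    rm -f /tmp/spdk.key-null.nmi /tmp/spdk.key-sha256.NMT /tmp/spdk.key-sha384.cub \
          /tmp/spdk.key-sha512.pEI /tmp/spdk.key-sha512.rju /tmp/spdk.key-sha384.ZLJ \
          /tmp/spdk.key-sha256.1Yz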
00:23:05.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:05.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.865 --rc genhtml_branch_coverage=1 00:23:05.865 --rc genhtml_function_coverage=1 00:23:05.865 --rc genhtml_legend=1 00:23:05.865 --rc geninfo_all_blocks=1 00:23:05.865 --rc geninfo_unexecuted_blocks=1 00:23:05.865 00:23:05.865 ' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:05.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.865 --rc genhtml_branch_coverage=1 00:23:05.865 --rc genhtml_function_coverage=1 00:23:05.865 --rc genhtml_legend=1 00:23:05.865 --rc geninfo_all_blocks=1 00:23:05.865 --rc geninfo_unexecuted_blocks=1 00:23:05.865 00:23:05.865 ' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:05.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.865 --rc genhtml_branch_coverage=1 00:23:05.865 --rc genhtml_function_coverage=1 00:23:05.865 --rc genhtml_legend=1 00:23:05.865 --rc geninfo_all_blocks=1 00:23:05.865 --rc geninfo_unexecuted_blocks=1 00:23:05.865 00:23:05.865 ' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:05.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.865 --rc genhtml_branch_coverage=1 00:23:05.865 --rc genhtml_function_coverage=1 00:23:05.865 --rc genhtml_legend=1 00:23:05.865 --rc geninfo_all_blocks=1 00:23:05.865 --rc geninfo_unexecuted_blocks=1 00:23:05.865 00:23:05.865 ' 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.865 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:05.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.866 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.403 
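The "[: : integer expression expected" message above (and again later when the TLS suite re-sources common.sh) comes from common.sh line 33 comparing an empty string with the numeric -eq operator; the test simply evaluates to false and the script continues. A minimal sketch of the failure mode and a guarded alternative — "flag" is a placeholder name, since the trace does not show which variable line 33 actually tests:

  # reproduces the message: an empty string is not an integer, so [ prints
  # "integer expression expected", returns non-zero, and the branch is skipped
  flag=''
  if [ "$flag" -eq 1 ]; then echo "enabled"; fi

  # guarded form that avoids the noise by defaulting an unset/empty value to 0
  if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; fi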
09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:08.403 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:08.403 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:08.403 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:08.404 Found net devices under 0000:84:00.0: cvl_0_0 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:08.404 Found net devices under 0000:84:00.1: cvl_0_1 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.404 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:23:08.404 00:23:08.404 --- 10.0.0.2 ping statistics --- 00:23:08.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.404 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:23:08.404 00:23:08.404 --- 10.0.0.1 ping statistics --- 00:23:08.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.404 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1563077 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1563077 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1563077 ']' 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.404 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:08.404 [2024-10-07 09:44:03.190454] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:23:08.404 [2024-10-07 09:44:03.190554] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:08.712 [2024-10-07 09:44:03.268402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.712 [2024-10-07 09:44:03.377531] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.712 [2024-10-07 09:44:03.377592] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.712 [2024-10-07 09:44:03.377621] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.712 [2024-10-07 09:44:03.377633] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.712 [2024-10-07 09:44:03.377643] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
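For reference, the nvmf_tcp_init steps traced just above reduce to the sequence below. This is a consolidated sketch of what this particular run shows: the interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.x addresses come from this host's E810 ports and are not fixed by the test itself.

  # target-side NIC moves into its own namespace; initiator NIC stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port (tagged so teardown can strip it) and verify reachability
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1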
00:23:08.712 [2024-10-07 09:44:03.378877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.712 [2024-10-07 09:44:03.378939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:23:08.712 [2024-10-07 09:44:03.378966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:23:08.712 [2024-10-07 09:44:03.378969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.999 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.999 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:08.999 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:08.999 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.999 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 [2024-10-07 09:44:03.542915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 Malloc0 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:09.000 [2024-10-07 09:44:03.581153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:09.000 { 00:23:09.000 "params": { 00:23:09.000 "name": "Nvme$subsystem", 00:23:09.000 "trtype": "$TEST_TRANSPORT", 00:23:09.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.000 "adrfam": "ipv4", 00:23:09.000 "trsvcid": "$NVMF_PORT", 00:23:09.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.000 "hdgst": ${hdgst:-false}, 00:23:09.000 "ddgst": ${ddgst:-false} 00:23:09.000 }, 00:23:09.000 "method": "bdev_nvme_attach_controller" 00:23:09.000 } 00:23:09.000 EOF 00:23:09.000 )") 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:09.000 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:09.000 "params": { 00:23:09.000 "name": "Nvme1", 00:23:09.000 "trtype": "tcp", 00:23:09.000 "traddr": "10.0.0.2", 00:23:09.000 "adrfam": "ipv4", 00:23:09.000 "trsvcid": "4420", 00:23:09.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.000 "hdgst": false, 00:23:09.000 "ddgst": false 00:23:09.000 }, 00:23:09.000 "method": "bdev_nvme_attach_controller" 00:23:09.000 }' 00:23:09.000 [2024-10-07 09:44:03.636113] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
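The rpc_cmd calls above are the test framework's wrapper around scripts/rpc.py; the subsystem exercised by this bdevio pass corresponds to the manual sequence below, assuming the target's default /var/tmp/spdk.sock RPC socket (the UNIX socket is reachable even though the target runs inside the cvl_0_0_ns_spdk namespace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transport with 8192-byte in-capsule data, one 64 MiB / 512 B malloc bdev as the namespace
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420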
00:23:09.000 [2024-10-07 09:44:03.636217] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1563108 ] 00:23:09.000 [2024-10-07 09:44:03.716374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:09.257 [2024-10-07 09:44:03.830921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.257 [2024-10-07 09:44:03.830948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.257 [2024-10-07 09:44:03.830952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.514 I/O targets: 00:23:09.514 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:09.514 00:23:09.514 00:23:09.514 CUnit - A unit testing framework for C - Version 2.1-3 00:23:09.514 http://cunit.sourceforge.net/ 00:23:09.514 00:23:09.514 00:23:09.514 Suite: bdevio tests on: Nvme1n1 00:23:09.514 Test: blockdev write read block ...passed 00:23:09.514 Test: blockdev write zeroes read block ...passed 00:23:09.514 Test: blockdev write zeroes read no split ...passed 00:23:09.514 Test: blockdev write zeroes read split ...passed 00:23:09.514 Test: blockdev write zeroes read split partial ...passed 00:23:09.514 Test: blockdev reset ...[2024-10-07 09:44:04.306715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.514 [2024-10-07 09:44:04.306829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc3f00 (9): Bad file descriptor 00:23:09.771 [2024-10-07 09:44:04.376050] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:09.771 passed 00:23:09.771 Test: blockdev write read 8 blocks ...passed 00:23:09.771 Test: blockdev write read size > 128k ...passed 00:23:09.771 Test: blockdev write read invalid size ...passed 00:23:09.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:09.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:09.771 Test: blockdev write read max offset ...passed 00:23:09.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:09.771 Test: blockdev writev readv 8 blocks ...passed 00:23:09.771 Test: blockdev writev readv 30 x 1block ...passed 00:23:10.029 Test: blockdev writev readv block ...passed 00:23:10.029 Test: blockdev writev readv size > 128k ...passed 00:23:10.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:10.029 Test: blockdev comparev and writev ...[2024-10-07 09:44:04.671799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.671860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.671878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.672353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.672377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.672406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.672423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.672858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.672883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.672911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.672929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.673394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.673419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.673441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:10.029 [2024-10-07 09:44:04.673456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:10.029 passed 00:23:10.029 Test: blockdev nvme passthru rw ...passed 00:23:10.029 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:44:04.755212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:10.029 [2024-10-07 09:44:04.755241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.755386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:10.029 [2024-10-07 09:44:04.755410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.755550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:10.029 [2024-10-07 09:44:04.755574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:10.029 [2024-10-07 09:44:04.755714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:10.029 [2024-10-07 09:44:04.755738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:10.029 passed 00:23:10.029 Test: blockdev nvme admin passthru ...passed 00:23:10.029 Test: blockdev copy ...passed 00:23:10.029 00:23:10.029 Run Summary: Type Total Ran Passed Failed Inactive 00:23:10.029 suites 1 1 n/a 0 0 00:23:10.029 tests 23 23 23 0 0 00:23:10.029 asserts 152 152 152 0 n/a 00:23:10.029 00:23:10.029 Elapsed time = 1.323 seconds 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.594 rmmod nvme_tcp 00:23:10.594 rmmod nvme_fabrics 00:23:10.594 rmmod nvme_keyring 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1563077 ']' 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1563077 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1563077 ']' 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1563077 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1563077 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1563077' 00:23:10.594 killing process with pid 1563077 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1563077 00:23:10.594 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1563077 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.161 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.066 00:23:13.066 real 0m7.647s 00:23:13.066 user 0m12.722s 00:23:13.066 sys 0m3.187s 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.066 ************************************ 00:23:13.066 END TEST nvmf_bdevio_no_huge 00:23:13.066 ************************************ 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:13.066 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.066 ************************************ 00:23:13.066 START TEST nvmf_tls 00:23:13.066 ************************************ 00:23:13.067 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:13.327 * Looking for test storage... 00:23:13.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.327 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:13.327 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:13.327 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.327 --rc genhtml_branch_coverage=1 00:23:13.327 --rc genhtml_function_coverage=1 00:23:13.327 --rc genhtml_legend=1 00:23:13.327 --rc geninfo_all_blocks=1 00:23:13.327 --rc geninfo_unexecuted_blocks=1 00:23:13.327 00:23:13.327 ' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.327 --rc genhtml_branch_coverage=1 00:23:13.327 --rc genhtml_function_coverage=1 00:23:13.327 --rc genhtml_legend=1 00:23:13.327 --rc geninfo_all_blocks=1 00:23:13.327 --rc geninfo_unexecuted_blocks=1 00:23:13.327 00:23:13.327 ' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.327 --rc genhtml_branch_coverage=1 00:23:13.327 --rc genhtml_function_coverage=1 00:23:13.327 --rc genhtml_legend=1 00:23:13.327 --rc geninfo_all_blocks=1 00:23:13.327 --rc geninfo_unexecuted_blocks=1 00:23:13.327 00:23:13.327 ' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.327 --rc genhtml_branch_coverage=1 00:23:13.327 --rc genhtml_function_coverage=1 00:23:13.327 --rc genhtml_legend=1 00:23:13.327 --rc geninfo_all_blocks=1 00:23:13.327 --rc geninfo_unexecuted_blocks=1 00:23:13.327 00:23:13.327 ' 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.327 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
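Before this TLS suite starts, the bdevio suite above ends with the standard nvmftestfini teardown. Consolidated, and again using this run's names, it amounts to the sketch below; $RPC and $nvmfpid are placeholders for the rpc.py path and target pid seen earlier, and the ip netns delete line is an assumption about what _remove_spdk_ns does, since its output is suppressed in the trace:

  # drop the subsystem, unload host-side modules (nvme_keyring is removed as a
  # dependency of nvme-fabrics, per the rmmod lines above), then stop the target
  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # wait works here because the target was launched from the same shell
  # keep only non-SPDK iptables rules, then remove the namespace and the initiator address
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1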
00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.328 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
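Illustrative aside (not executed by the harness): gather_supported_nvmf_pci_devs below builds the lists of Intel E810/X722 and Mellanox vendor:device IDs it will accept, and then reports the two E810 ports this host actually has (0000:84:00.0 and 0000:84:00.1, device 0x159b) plus the kernel netdevs behind them. The same lookup can be done by hand:

    # E810-class functions by vendor:device ID (0x8086:0x159b, one of the IDs
    # collected below); on this host that is 0000:84:00.0 and 0000:84:00.1.
    lspci -nn -d 8086:159b
    # The backing netdev sits under the function's sysfs node -- the same
    # /sys/bus/pci/devices/<pci>/net/ glob the script expands; here it
    # resolves to cvl_0_0 and cvl_0_1.
    ls /sys/bus/pci/devices/0000:84:00.0/net/
    ls /sys/bus/pci/devices/0000:84:00.1/net/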
00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.860 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:15.861 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:15.861 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:15.861 Found net devices under 0000:84:00.0: cvl_0_0 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:15.861 Found net devices under 0000:84:00.1: cvl_0_1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:23:15.861 00:23:15.861 --- 10.0.0.2 ping statistics --- 00:23:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.861 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:15.861 00:23:15.861 --- 10.0.0.1 ping statistics --- 00:23:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.861 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1565329 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1565329 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1565329 ']' 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.861 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 [2024-10-07 09:44:10.711686] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
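To keep the long trace readable: at this point nvmf_tcp_init has arranged the two E810 ports into a two-sided test topology (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 left on the host as the initiator side at 10.0.0.1), verified it with the two pings, and launched nvmf_tgt under that namespace with --wait-for-rpc. A condensed sketch of that bring-up and of the rpc.py TLS configuration that tls.sh performs next (rpc.py and nvmf_tgt stand in for the full workspace paths used in the trace; the /tmp/tmp.q4gIwy6V06 key file is the mktemp + chmod 0600 result generated further below):

    # Test network bring-up, condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                             # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host

    # Target app plus the TLS configuration that the following trace applies to it.
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.q4gIwy6V06           # interchange-format PSK file
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0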
00:23:16.119 [2024-10-07 09:44:10.711771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.119 [2024-10-07 09:44:10.778885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.119 [2024-10-07 09:44:10.890454] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.119 [2024-10-07 09:44:10.890526] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.119 [2024-10-07 09:44:10.890539] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.119 [2024-10-07 09:44:10.890550] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.119 [2024-10-07 09:44:10.890559] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.119 [2024-10-07 09:44:10.891298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:16.378 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:16.635 true 00:23:16.635 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:16.635 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:17.200 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:17.200 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:17.200 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:17.768 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:17.768 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:18.368 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:18.368 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:18.368 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:18.368 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:18.368 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:19.302 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:19.302 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:19.302 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:19.302 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:19.560 09:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:19.560 09:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:19.560 09:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:20.128 09:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.128 09:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:20.694 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:20.694 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:20.694 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:21.265 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.265 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:21.833 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.q4gIwy6V06 00:23:22.092 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.y0nrnNOFoR 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.q4gIwy6V06 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.y0nrnNOFoR 00:23:22.093 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:22.661 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:23.228 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.q4gIwy6V06 00:23:23.228 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q4gIwy6V06 00:23:23.228 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.487 [2024-10-07 09:44:18.084777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.487 09:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.745 09:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.312 [2024-10-07 09:44:18.999716] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.312 [2024-10-07 09:44:19.000106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.312 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.878 malloc0 00:23:24.878 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.137 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q4gIwy6V06 00:23:25.705 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.274 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.q4gIwy6V06 00:23:38.470 Initializing NVMe Controllers 00:23:38.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.470 Initialization complete. Launching workers. 00:23:38.470 ======================================================== 00:23:38.470 Latency(us) 00:23:38.470 Device Information : IOPS MiB/s Average min max 00:23:38.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7439.56 29.06 8604.78 1238.14 9592.52 00:23:38.470 ======================================================== 00:23:38.471 Total : 7439.56 29.06 8604.78 1238.14 9592.52 00:23:38.471 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q4gIwy6V06 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q4gIwy6V06 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1567736 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1567736 /var/tmp/bdevperf.sock 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1567736 ']' 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:38.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.471 [2024-10-07 09:44:31.173443] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:23:38.471 [2024-10-07 09:44:31.173607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567736 ] 00:23:38.471 [2024-10-07 09:44:31.260884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.471 [2024-10-07 09:44:31.372632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q4gIwy6V06 00:23:38.471 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.471 [2024-10-07 09:44:32.258517] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.471 TLSTESTn1 00:23:38.471 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.471 Running I/O for 10 seconds... 
00:23:48.012 3603.00 IOPS, 14.07 MiB/s 3606.50 IOPS, 14.09 MiB/s 3618.33 IOPS, 14.13 MiB/s 3608.75 IOPS, 14.10 MiB/s 3610.60 IOPS, 14.10 MiB/s 3617.67 IOPS, 14.13 MiB/s 3615.86 IOPS, 14.12 MiB/s 3617.50 IOPS, 14.13 MiB/s 3618.44 IOPS, 14.13 MiB/s 3617.60 IOPS, 14.13 MiB/s 00:23:48.012 Latency(us) 00:23:48.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.012 Verification LBA range: start 0x0 length 0x2000 00:23:48.012 TLSTESTn1 : 10.02 3623.21 14.15 0.00 0.00 35271.06 6140.97 36311.80 00:23:48.012 =================================================================================================================== 00:23:48.012 Total : 3623.21 14.15 0.00 0.00 35271.06 6140.97 36311.80 00:23:48.012 { 00:23:48.012 "results": [ 00:23:48.012 { 00:23:48.012 "job": "TLSTESTn1", 00:23:48.012 "core_mask": "0x4", 00:23:48.012 "workload": "verify", 00:23:48.012 "status": "finished", 00:23:48.012 "verify_range": { 00:23:48.012 "start": 0, 00:23:48.012 "length": 8192 00:23:48.012 }, 00:23:48.012 "queue_depth": 128, 00:23:48.012 "io_size": 4096, 00:23:48.012 "runtime": 10.019019, 00:23:48.012 "iops": 3623.2090187672065, 00:23:48.012 "mibps": 14.1531602295594, 00:23:48.012 "io_failed": 0, 00:23:48.012 "io_timeout": 0, 00:23:48.012 "avg_latency_us": 35271.06087892692, 00:23:48.012 "min_latency_us": 6140.965925925926, 00:23:48.012 "max_latency_us": 36311.79851851852 00:23:48.012 } 00:23:48.012 ], 00:23:48.012 "core_count": 1 00:23:48.012 } 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1567736 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1567736 ']' 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1567736 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1567736 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1567736' 00:23:48.012 killing process with pid 1567736 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1567736 00:23:48.012 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.012 00:23:48.012 Latency(us) 00:23:48.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.012 =================================================================================================================== 00:23:48.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.012 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1567736 00:23:48.270 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.y0nrnNOFoR 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y0nrnNOFoR 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y0nrnNOFoR 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y0nrnNOFoR 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1569061 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1569061 /var/tmp/bdevperf.sock 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1569061 ']' 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.271 09:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 [2024-10-07 09:44:42.932323] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:23:48.271 [2024-10-07 09:44:42.932440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569061 ] 00:23:48.271 [2024-10-07 09:44:43.002604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.529 [2024-10-07 09:44:43.113393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.529 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.529 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:48.787 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y0nrnNOFoR 00:23:49.097 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.388 [2024-10-07 09:44:44.152679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.388 [2024-10-07 09:44:44.158211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:49.388 [2024-10-07 09:44:44.158768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154b70 (107): Transport endpoint is not connected 00:23:49.388 [2024-10-07 09:44:44.159759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154b70 (9): Bad file descriptor 00:23:49.388 [2024-10-07 09:44:44.160757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.389 [2024-10-07 09:44:44.160777] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.389 [2024-10-07 09:44:44.160805] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:49.389 [2024-10-07 09:44:44.160824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:49.389 request: 00:23:49.389 { 00:23:49.389 "name": "TLSTEST", 00:23:49.389 "trtype": "tcp", 00:23:49.389 "traddr": "10.0.0.2", 00:23:49.389 "adrfam": "ipv4", 00:23:49.389 "trsvcid": "4420", 00:23:49.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.389 "prchk_reftag": false, 00:23:49.389 "prchk_guard": false, 00:23:49.389 "hdgst": false, 00:23:49.389 "ddgst": false, 00:23:49.389 "psk": "key0", 00:23:49.389 "allow_unrecognized_csi": false, 00:23:49.389 "method": "bdev_nvme_attach_controller", 00:23:49.389 "req_id": 1 00:23:49.389 } 00:23:49.389 Got JSON-RPC error response 00:23:49.389 response: 00:23:49.389 { 00:23:49.389 "code": -5, 00:23:49.389 "message": "Input/output error" 00:23:49.389 } 00:23:49.389 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1569061 00:23:49.389 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1569061 ']' 00:23:49.389 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1569061 00:23:49.389 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.389 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569061 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569061' 00:23:49.674 killing process with pid 1569061 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1569061 00:23:49.674 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.674 00:23:49.674 Latency(us) 00:23:49.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.674 =================================================================================================================== 00:23:49.674 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1569061 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q4gIwy6V06 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q4gIwy6V06 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.q4gIwy6V06 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q4gIwy6V06 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1569208 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1569208 /var/tmp/bdevperf.sock 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1569208 ']' 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.674 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.933 [2024-10-07 09:44:44.514327] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:23:49.933 [2024-10-07 09:44:44.514421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569208 ] 00:23:49.933 [2024-10-07 09:44:44.578546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.933 [2024-10-07 09:44:44.694301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.194 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.194 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:50.194 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q4gIwy6V06 00:23:50.452 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:50.710 [2024-10-07 09:44:45.397554] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.711 [2024-10-07 09:44:45.408219] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.711 [2024-10-07 09:44:45.408248] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.711 [2024-10-07 09:44:45.408301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:50.711 [2024-10-07 09:44:45.408793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1b70 (107): Transport endpoint is not connected 00:23:50.711 [2024-10-07 09:44:45.409785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1b70 (9): Bad file descriptor 00:23:50.711 [2024-10-07 09:44:45.410784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.711 [2024-10-07 09:44:45.410803] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:50.711 [2024-10-07 09:44:45.410832] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:50.711 [2024-10-07 09:44:45.410850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:50.711 request: 00:23:50.711 { 00:23:50.711 "name": "TLSTEST", 00:23:50.711 "trtype": "tcp", 00:23:50.711 "traddr": "10.0.0.2", 00:23:50.711 "adrfam": "ipv4", 00:23:50.711 "trsvcid": "4420", 00:23:50.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.711 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.711 "prchk_reftag": false, 00:23:50.711 "prchk_guard": false, 00:23:50.711 "hdgst": false, 00:23:50.711 "ddgst": false, 00:23:50.711 "psk": "key0", 00:23:50.711 "allow_unrecognized_csi": false, 00:23:50.711 "method": "bdev_nvme_attach_controller", 00:23:50.711 "req_id": 1 00:23:50.711 } 00:23:50.711 Got JSON-RPC error response 00:23:50.711 response: 00:23:50.711 { 00:23:50.711 "code": -5, 00:23:50.711 "message": "Input/output error" 00:23:50.711 } 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1569208 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1569208 ']' 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1569208 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569208 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569208' 00:23:50.711 killing process with pid 1569208 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1569208 00:23:50.711 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.711 00:23:50.711 Latency(us) 00:23:50.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.711 =================================================================================================================== 00:23:50.711 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.711 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1569208 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q4gIwy6V06 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q4gIwy6V06 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.q4gIwy6V06 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q4gIwy6V06 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1569356 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1569356 /var/tmp/bdevperf.sock 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1569356 ']' 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.969 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.970 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.970 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.229 [2024-10-07 09:44:45.808702] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:23:51.229 [2024-10-07 09:44:45.808797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569356 ] 00:23:51.229 [2024-10-07 09:44:45.873763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.229 [2024-10-07 09:44:45.985174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.487 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.487 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.487 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q4gIwy6V06 00:23:51.745 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.004 [2024-10-07 09:44:46.677177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.004 [2024-10-07 09:44:46.682512] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:52.004 [2024-10-07 09:44:46.682541] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:52.004 [2024-10-07 09:44:46.682600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:52.004 [2024-10-07 09:44:46.683219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbab70 (107): Transport endpoint is not connected 00:23:52.004 [2024-10-07 09:44:46.684194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbab70 (9): Bad file descriptor 00:23:52.004 [2024-10-07 09:44:46.685193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:52.004 [2024-10-07 09:44:46.685220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:52.004 [2024-10-07 09:44:46.685234] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:52.004 [2024-10-07 09:44:46.685267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:52.004 request: 00:23:52.004 { 00:23:52.004 "name": "TLSTEST", 00:23:52.004 "trtype": "tcp", 00:23:52.004 "traddr": "10.0.0.2", 00:23:52.004 "adrfam": "ipv4", 00:23:52.004 "trsvcid": "4420", 00:23:52.004 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.004 "prchk_reftag": false, 00:23:52.004 "prchk_guard": false, 00:23:52.004 "hdgst": false, 00:23:52.004 "ddgst": false, 00:23:52.004 "psk": "key0", 00:23:52.004 "allow_unrecognized_csi": false, 00:23:52.004 "method": "bdev_nvme_attach_controller", 00:23:52.004 "req_id": 1 00:23:52.004 } 00:23:52.004 Got JSON-RPC error response 00:23:52.004 response: 00:23:52.004 { 00:23:52.004 "code": -5, 00:23:52.004 "message": "Input/output error" 00:23:52.004 } 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1569356 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1569356 ']' 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1569356 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569356 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569356' 00:23:52.004 killing process with pid 1569356 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1569356 00:23:52.004 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.004 00:23:52.004 Latency(us) 00:23:52.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.004 =================================================================================================================== 00:23:52.004 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.004 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1569356 00:23:52.263 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.263 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:52.263 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.263 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1569495 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1569495 /var/tmp/bdevperf.sock 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1569495 ']' 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.264 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.522 [2024-10-07 09:44:47.083037] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
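The case being set up here hands run_bdevperf an empty string as the PSK path, so the failure happens in the keyring rather than on the wire. The check can be seen in isolation with the single RPC below (socket path and key name as in the log; the empty path is the point of the test):
  # keyring_file_add_key refuses non-absolute paths, so an empty path is rejected
  # with "Operation not permitted"; the later attach then fails with
  # "Required key not available" (-126) instead of an I/O error
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''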
00:23:52.522 [2024-10-07 09:44:47.083132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569495 ] 00:23:52.522 [2024-10-07 09:44:47.150440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.522 [2024-10-07 09:44:47.266806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.779 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.779 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.779 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:53.037 [2024-10-07 09:44:47.772651] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:53.037 [2024-10-07 09:44:47.772692] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:53.037 request: 00:23:53.037 { 00:23:53.037 "name": "key0", 00:23:53.037 "path": "", 00:23:53.037 "method": "keyring_file_add_key", 00:23:53.037 "req_id": 1 00:23:53.037 } 00:23:53.037 Got JSON-RPC error response 00:23:53.037 response: 00:23:53.037 { 00:23:53.037 "code": -1, 00:23:53.037 "message": "Operation not permitted" 00:23:53.037 } 00:23:53.037 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.295 [2024-10-07 09:44:48.097625] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.295 [2024-10-07 09:44:48.097682] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:53.295 request: 00:23:53.295 { 00:23:53.295 "name": "TLSTEST", 00:23:53.295 "trtype": "tcp", 00:23:53.295 "traddr": "10.0.0.2", 00:23:53.295 "adrfam": "ipv4", 00:23:53.295 "trsvcid": "4420", 00:23:53.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.295 "prchk_reftag": false, 00:23:53.295 "prchk_guard": false, 00:23:53.295 "hdgst": false, 00:23:53.295 "ddgst": false, 00:23:53.295 "psk": "key0", 00:23:53.295 "allow_unrecognized_csi": false, 00:23:53.295 "method": "bdev_nvme_attach_controller", 00:23:53.295 "req_id": 1 00:23:53.295 } 00:23:53.295 Got JSON-RPC error response 00:23:53.295 response: 00:23:53.295 { 00:23:53.295 "code": -126, 00:23:53.295 "message": "Required key not available" 00:23:53.295 } 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1569495 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1569495 ']' 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1569495 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1569495 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569495' 00:23:53.554 killing process with pid 1569495 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1569495 00:23:53.554 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.554 00:23:53.554 Latency(us) 00:23:53.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.554 =================================================================================================================== 00:23:53.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.554 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1569495 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1565329 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1565329 ']' 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1565329 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1565329 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1565329' 00:23:53.812 killing process with pid 1565329 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1565329 00:23:53.812 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1565329 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zxUnf0GL3m 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zxUnf0GL3m 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1569773 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.380 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1569773 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1569773 ']' 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.380 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.380 [2024-10-07 09:44:49.065486] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:23:54.380 [2024-10-07 09:44:49.065608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.380 [2024-10-07 09:44:49.163873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.639 [2024-10-07 09:44:49.340050] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.639 [2024-10-07 09:44:49.340123] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
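The long-format interchange PSK generated above is simply written to a temp file whose permissions are then locked down before the path is handed to the keyring. A minimal stand-alone equivalent, reusing the throwaway key printed in the log (only the variable names are an addition of this sketch):
  # interchange-format PSK exactly as produced by format_interchange_psk above
  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_path=$(mktemp)
  echo -n "$key" > "$key_path"
  # a 0666 key file is rejected later in this log, so the harness keeps it at 0600
  chmod 0600 "$key_path"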
00:23:54.639 [2024-10-07 09:44:49.340140] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.639 [2024-10-07 09:44:49.340153] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.639 [2024-10-07 09:44:49.340174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.639 [2024-10-07 09:44:49.341312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zxUnf0GL3m 00:23:55.574 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.833 [2024-10-07 09:44:50.456711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.833 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.093 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:56.352 [2024-10-07 09:44:51.102850] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.352 [2024-10-07 09:44:51.103206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.352 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.919 malloc0 00:23:56.919 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:57.178 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:23:57.436 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zxUnf0GL3m 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zxUnf0GL3m 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1570193 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1570193 /var/tmp/bdevperf.sock 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1570193 ']' 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.694 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.953 [2024-10-07 09:44:52.522696] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
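Pulled out of the xtrace noise, the target-side sequence setup_nvmf_tgt just replayed above amounts to the RPCs below (all copied from the log; the checkout-relative rpc.py path and the implicit default /var/tmp/spdk.sock are the only assumptions):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-enabled ("TLS support is considered experimental")
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # register the 0600 key file and tie it to the allowed host as its PSK
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0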
00:23:57.953 [2024-10-07 09:44:52.522803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570193 ] 00:23:57.953 [2024-10-07 09:44:52.592966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.953 [2024-10-07 09:44:52.717648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.212 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.212 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:58.212 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:23:59.147 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.406 [2024-10-07 09:44:53.970420] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.406 TLSTESTn1 00:23:59.406 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:59.664 Running I/O for 10 seconds... 00:24:09.880 3428.00 IOPS, 13.39 MiB/s 3476.00 IOPS, 13.58 MiB/s 3450.67 IOPS, 13.48 MiB/s 3467.00 IOPS, 13.54 MiB/s 3467.80 IOPS, 13.55 MiB/s 3485.83 IOPS, 13.62 MiB/s 3468.57 IOPS, 13.55 MiB/s 3485.12 IOPS, 13.61 MiB/s 3486.22 IOPS, 13.62 MiB/s 3499.60 IOPS, 13.67 MiB/s 00:24:09.880 Latency(us) 00:24:09.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.880 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:09.880 Verification LBA range: start 0x0 length 0x2000 00:24:09.880 TLSTESTn1 : 10.02 3505.75 13.69 0.00 0.00 36449.52 5873.97 36117.62 00:24:09.880 =================================================================================================================== 00:24:09.880 Total : 3505.75 13.69 0.00 0.00 36449.52 5873.97 36117.62 00:24:09.880 { 00:24:09.880 "results": [ 00:24:09.880 { 00:24:09.880 "job": "TLSTESTn1", 00:24:09.880 "core_mask": "0x4", 00:24:09.880 "workload": "verify", 00:24:09.880 "status": "finished", 00:24:09.880 "verify_range": { 00:24:09.880 "start": 0, 00:24:09.880 "length": 8192 00:24:09.880 }, 00:24:09.880 "queue_depth": 128, 00:24:09.880 "io_size": 4096, 00:24:09.880 "runtime": 10.018112, 00:24:09.880 "iops": 3505.750384902864, 00:24:09.880 "mibps": 13.694337441026812, 00:24:09.880 "io_failed": 0, 00:24:09.880 "io_timeout": 0, 00:24:09.880 "avg_latency_us": 36449.51978934203, 00:24:09.880 "min_latency_us": 5873.967407407407, 00:24:09.880 "max_latency_us": 36117.61777777778 00:24:09.880 } 00:24:09.880 ], 00:24:09.880 "core_count": 1 00:24:09.880 } 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1570193 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 1570193 ']' 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1570193 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1570193 00:24:09.880 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1570193' 00:24:09.881 killing process with pid 1570193 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1570193 00:24:09.881 Received shutdown signal, test time was about 10.000000 seconds 00:24:09.881 00:24:09.881 Latency(us) 00:24:09.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.881 =================================================================================================================== 00:24:09.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1570193 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zxUnf0GL3m 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zxUnf0GL3m 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zxUnf0GL3m 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zxUnf0GL3m 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zxUnf0GL3m 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1571736 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1571736 /var/tmp/bdevperf.sock 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1571736 ']' 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.881 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.881 [2024-10-07 09:45:04.676559] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:09.881 [2024-10-07 09:45:04.676734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571736 ] 00:24:10.139 [2024-10-07 09:45:04.782067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.139 [2024-10-07 09:45:04.898642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.072 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.072 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:11.072 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:11.330 [2024-10-07 09:45:05.988157] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zxUnf0GL3m': 0100666 00:24:11.330 [2024-10-07 09:45:05.988208] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:11.330 request: 00:24:11.330 { 00:24:11.330 "name": "key0", 00:24:11.330 "path": "/tmp/tmp.zxUnf0GL3m", 00:24:11.330 "method": "keyring_file_add_key", 00:24:11.330 "req_id": 1 00:24:11.330 } 00:24:11.330 Got JSON-RPC error response 00:24:11.330 response: 00:24:11.330 { 00:24:11.330 "code": -1, 00:24:11.330 "message": "Operation not permitted" 00:24:11.330 } 00:24:11.330 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.895 [2024-10-07 09:45:06.485573] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.895 [2024-10-07 09:45:06.485658] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:24:11.895 request: 00:24:11.895 { 00:24:11.895 "name": "TLSTEST", 00:24:11.895 "trtype": "tcp", 00:24:11.895 "traddr": "10.0.0.2", 00:24:11.895 "adrfam": "ipv4", 00:24:11.895 "trsvcid": "4420", 00:24:11.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.895 "prchk_reftag": false, 00:24:11.895 "prchk_guard": false, 00:24:11.895 "hdgst": false, 00:24:11.895 "ddgst": false, 00:24:11.895 "psk": "key0", 00:24:11.895 "allow_unrecognized_csi": false, 00:24:11.895 "method": "bdev_nvme_attach_controller", 00:24:11.895 "req_id": 1 00:24:11.895 } 00:24:11.895 Got JSON-RPC error response 00:24:11.895 response: 00:24:11.895 { 00:24:11.895 "code": -126, 00:24:11.895 "message": "Required key not available" 00:24:11.895 } 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1571736 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1571736 ']' 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1571736 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1571736 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1571736' 00:24:11.895 killing process with pid 1571736 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1571736 00:24:11.895 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.895 00:24:11.895 Latency(us) 00:24:11.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.895 =================================================================================================================== 00:24:11.895 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.895 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1571736 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1569773 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1569773 ']' 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1569773 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569773 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569773' 00:24:12.152 killing process with pid 1569773 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1569773 00:24:12.152 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1569773 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1572032 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1572032 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1572032 ']' 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.720 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.720 [2024-10-07 09:45:07.382171] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:12.720 [2024-10-07 09:45:07.382366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.720 [2024-10-07 09:45:07.490202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.978 [2024-10-07 09:45:07.603486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.978 [2024-10-07 09:45:07.603548] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
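The pass that follows repeats the target setup while the key file is still mode 0666 (from the chmod 0666 a little earlier), so it is the keyring's permission check that trips first. That check can be reproduced on its own against the running target (paths and key name as in the log):
  chmod 0666 /tmp/tmp.zxUnf0GL3m
  # rejected: "Invalid permissions for key file '/tmp/tmp.zxUnf0GL3m': 0100666"
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m
  # with no key0 in the keyring, the subsequent nvmf_subsystem_add_host --psk key0
  # fails with "Key 'key0' does not exist", as the log below shows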
00:24:12.978 [2024-10-07 09:45:07.603575] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.978 [2024-10-07 09:45:07.603586] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.978 [2024-10-07 09:45:07.603595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.978 [2024-10-07 09:45:07.604285] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zxUnf0GL3m 00:24:12.978 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.546 [2024-10-07 09:45:08.194879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.546 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:13.804 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:14.062 [2024-10-07 09:45:08.772453] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.062 [2024-10-07 09:45:08.772982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.062 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.320 malloc0 00:24:14.321 09:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.887 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:15.147 [2024-10-07 09:45:09.863763] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zxUnf0GL3m': 0100666 00:24:15.147 [2024-10-07 09:45:09.863851] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:15.147 request: 00:24:15.147 { 00:24:15.147 "name": "key0", 00:24:15.147 "path": "/tmp/tmp.zxUnf0GL3m", 00:24:15.147 "method": "keyring_file_add_key", 00:24:15.147 "req_id": 1 00:24:15.147 } 00:24:15.147 Got JSON-RPC error response 00:24:15.147 response: 00:24:15.147 { 00:24:15.147 "code": -1, 00:24:15.147 "message": "Operation not permitted" 00:24:15.147 } 00:24:15.147 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.406 [2024-10-07 09:45:10.196879] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:15.406 [2024-10-07 09:45:10.197011] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:15.406 request: 00:24:15.406 { 00:24:15.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.406 "host": "nqn.2016-06.io.spdk:host1", 00:24:15.406 "psk": "key0", 00:24:15.406 "method": "nvmf_subsystem_add_host", 00:24:15.406 "req_id": 1 00:24:15.406 } 00:24:15.406 Got JSON-RPC error response 00:24:15.406 response: 00:24:15.406 { 00:24:15.406 "code": -32603, 00:24:15.406 "message": "Internal error" 00:24:15.406 } 00:24:15.406 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:15.406 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.406 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.406 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.406 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1572032 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1572032 ']' 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1572032 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572032 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572032' 00:24:15.665 killing process with pid 1572032 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1572032 00:24:15.665 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1572032 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zxUnf0GL3m 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1572963 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1572963 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1572963 ']' 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.924 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.924 [2024-10-07 09:45:10.730030] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:15.924 [2024-10-07 09:45:10.730131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.183 [2024-10-07 09:45:10.838561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.443 [2024-10-07 09:45:11.034200] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.443 [2024-10-07 09:45:11.034290] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.443 [2024-10-07 09:45:11.034326] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.443 [2024-10-07 09:45:11.034356] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.443 [2024-10-07 09:45:11.034382] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
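With the key file back at 0600, the final pass repeats the setup, runs the bdevperf verify workload, and then snapshots the live target configuration. The same dump can be taken by hand; redirecting it to a file is an assumption of this sketch (the harness captures it into a shell variable instead):
  chmod 0600 /tmp/tmp.zxUnf0GL3m
  # capture the running target's JSON configuration, including the keyring entry
  # for key0, so it can be replayed later via rpc.py load_config
  ./scripts/rpc.py save_config > tgtconf.json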
00:24:16.443 [2024-10-07 09:45:11.035281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zxUnf0GL3m 00:24:16.443 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:17.011 [2024-10-07 09:45:11.802093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.011 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.577 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:17.836 [2024-10-07 09:45:12.460039] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.836 [2024-10-07 09:45:12.460540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.836 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:18.096 malloc0 00:24:18.096 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:18.663 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:18.922 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1573280 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1573280 /var/tmp/bdevperf.sock 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1573280 ']' 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.181 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.181 [2024-10-07 09:45:13.877022] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:19.181 [2024-10-07 09:45:13.877105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573280 ] 00:24:19.181 [2024-10-07 09:45:13.939496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.446 [2024-10-07 09:45:14.055391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.446 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.446 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:19.446 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:19.753 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.036 [2024-10-07 09:45:14.807168] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.294 TLSTESTn1 00:24:20.294 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:20.859 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:20.859 "subsystems": [ 00:24:20.859 { 00:24:20.859 "subsystem": "keyring", 00:24:20.859 "config": [ 00:24:20.859 { 00:24:20.859 "method": "keyring_file_add_key", 00:24:20.859 "params": { 00:24:20.859 "name": "key0", 00:24:20.860 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:20.860 } 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "iobuf", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "iobuf_set_options", 00:24:20.860 "params": { 00:24:20.860 "small_pool_count": 8192, 00:24:20.860 "large_pool_count": 1024, 00:24:20.860 "small_bufsize": 8192, 00:24:20.860 "large_bufsize": 135168 00:24:20.860 } 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "sock", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "sock_set_default_impl", 00:24:20.860 "params": { 00:24:20.860 "impl_name": "posix" 00:24:20.860 } 00:24:20.860 }, 
00:24:20.860 { 00:24:20.860 "method": "sock_impl_set_options", 00:24:20.860 "params": { 00:24:20.860 "impl_name": "ssl", 00:24:20.860 "recv_buf_size": 4096, 00:24:20.860 "send_buf_size": 4096, 00:24:20.860 "enable_recv_pipe": true, 00:24:20.860 "enable_quickack": false, 00:24:20.860 "enable_placement_id": 0, 00:24:20.860 "enable_zerocopy_send_server": true, 00:24:20.860 "enable_zerocopy_send_client": false, 00:24:20.860 "zerocopy_threshold": 0, 00:24:20.860 "tls_version": 0, 00:24:20.860 "enable_ktls": false 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "sock_impl_set_options", 00:24:20.860 "params": { 00:24:20.860 "impl_name": "posix", 00:24:20.860 "recv_buf_size": 2097152, 00:24:20.860 "send_buf_size": 2097152, 00:24:20.860 "enable_recv_pipe": true, 00:24:20.860 "enable_quickack": false, 00:24:20.860 "enable_placement_id": 0, 00:24:20.860 "enable_zerocopy_send_server": true, 00:24:20.860 "enable_zerocopy_send_client": false, 00:24:20.860 "zerocopy_threshold": 0, 00:24:20.860 "tls_version": 0, 00:24:20.860 "enable_ktls": false 00:24:20.860 } 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "vmd", 00:24:20.860 "config": [] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "accel", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "accel_set_options", 00:24:20.860 "params": { 00:24:20.860 "small_cache_size": 128, 00:24:20.860 "large_cache_size": 16, 00:24:20.860 "task_count": 2048, 00:24:20.860 "sequence_count": 2048, 00:24:20.860 "buf_count": 2048 00:24:20.860 } 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "bdev", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "bdev_set_options", 00:24:20.860 "params": { 00:24:20.860 "bdev_io_pool_size": 65535, 00:24:20.860 "bdev_io_cache_size": 256, 00:24:20.860 "bdev_auto_examine": true, 00:24:20.860 "iobuf_small_cache_size": 128, 00:24:20.860 "iobuf_large_cache_size": 16 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_raid_set_options", 00:24:20.860 "params": { 00:24:20.860 "process_window_size_kb": 1024, 00:24:20.860 "process_max_bandwidth_mb_sec": 0 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_iscsi_set_options", 00:24:20.860 "params": { 00:24:20.860 "timeout_sec": 30 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_nvme_set_options", 00:24:20.860 "params": { 00:24:20.860 "action_on_timeout": "none", 00:24:20.860 "timeout_us": 0, 00:24:20.860 "timeout_admin_us": 0, 00:24:20.860 "keep_alive_timeout_ms": 10000, 00:24:20.860 "arbitration_burst": 0, 00:24:20.860 "low_priority_weight": 0, 00:24:20.860 "medium_priority_weight": 0, 00:24:20.860 "high_priority_weight": 0, 00:24:20.860 "nvme_adminq_poll_period_us": 10000, 00:24:20.860 "nvme_ioq_poll_period_us": 0, 00:24:20.860 "io_queue_requests": 0, 00:24:20.860 "delay_cmd_submit": true, 00:24:20.860 "transport_retry_count": 4, 00:24:20.860 "bdev_retry_count": 3, 00:24:20.860 "transport_ack_timeout": 0, 00:24:20.860 "ctrlr_loss_timeout_sec": 0, 00:24:20.860 "reconnect_delay_sec": 0, 00:24:20.860 "fast_io_fail_timeout_sec": 0, 00:24:20.860 "disable_auto_failback": false, 00:24:20.860 "generate_uuids": false, 00:24:20.860 "transport_tos": 0, 00:24:20.860 "nvme_error_stat": false, 00:24:20.860 "rdma_srq_size": 0, 00:24:20.860 "io_path_stat": false, 00:24:20.860 "allow_accel_sequence": false, 00:24:20.860 "rdma_max_cq_size": 0, 00:24:20.860 "rdma_cm_event_timeout_ms": 0, 00:24:20.860 
"dhchap_digests": [ 00:24:20.860 "sha256", 00:24:20.860 "sha384", 00:24:20.860 "sha512" 00:24:20.860 ], 00:24:20.860 "dhchap_dhgroups": [ 00:24:20.860 "null", 00:24:20.860 "ffdhe2048", 00:24:20.860 "ffdhe3072", 00:24:20.860 "ffdhe4096", 00:24:20.860 "ffdhe6144", 00:24:20.860 "ffdhe8192" 00:24:20.860 ] 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_nvme_set_hotplug", 00:24:20.860 "params": { 00:24:20.860 "period_us": 100000, 00:24:20.860 "enable": false 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_malloc_create", 00:24:20.860 "params": { 00:24:20.860 "name": "malloc0", 00:24:20.860 "num_blocks": 8192, 00:24:20.860 "block_size": 4096, 00:24:20.860 "physical_block_size": 4096, 00:24:20.860 "uuid": "612b5992-0c41-4e35-86e2-fe0a911cd627", 00:24:20.860 "optimal_io_boundary": 0, 00:24:20.860 "md_size": 0, 00:24:20.860 "dif_type": 0, 00:24:20.860 "dif_is_head_of_md": false, 00:24:20.860 "dif_pi_format": 0 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "bdev_wait_for_examine" 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "nbd", 00:24:20.860 "config": [] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "scheduler", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "framework_set_scheduler", 00:24:20.860 "params": { 00:24:20.860 "name": "static" 00:24:20.860 } 00:24:20.860 } 00:24:20.860 ] 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "subsystem": "nvmf", 00:24:20.860 "config": [ 00:24:20.860 { 00:24:20.860 "method": "nvmf_set_config", 00:24:20.860 "params": { 00:24:20.860 "discovery_filter": "match_any", 00:24:20.860 "admin_cmd_passthru": { 00:24:20.860 "identify_ctrlr": false 00:24:20.860 }, 00:24:20.860 "dhchap_digests": [ 00:24:20.860 "sha256", 00:24:20.860 "sha384", 00:24:20.860 "sha512" 00:24:20.860 ], 00:24:20.860 "dhchap_dhgroups": [ 00:24:20.860 "null", 00:24:20.860 "ffdhe2048", 00:24:20.860 "ffdhe3072", 00:24:20.860 "ffdhe4096", 00:24:20.860 "ffdhe6144", 00:24:20.860 "ffdhe8192" 00:24:20.860 ] 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "nvmf_set_max_subsystems", 00:24:20.860 "params": { 00:24:20.860 "max_subsystems": 1024 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "nvmf_set_crdt", 00:24:20.860 "params": { 00:24:20.860 "crdt1": 0, 00:24:20.860 "crdt2": 0, 00:24:20.860 "crdt3": 0 00:24:20.860 } 00:24:20.860 }, 00:24:20.860 { 00:24:20.860 "method": "nvmf_create_transport", 00:24:20.860 "params": { 00:24:20.860 "trtype": "TCP", 00:24:20.860 "max_queue_depth": 128, 00:24:20.860 "max_io_qpairs_per_ctrlr": 127, 00:24:20.860 "in_capsule_data_size": 4096, 00:24:20.860 "max_io_size": 131072, 00:24:20.860 "io_unit_size": 131072, 00:24:20.861 "max_aq_depth": 128, 00:24:20.861 "num_shared_buffers": 511, 00:24:20.861 "buf_cache_size": 4294967295, 00:24:20.861 "dif_insert_or_strip": false, 00:24:20.861 "zcopy": false, 00:24:20.861 "c2h_success": false, 00:24:20.861 "sock_priority": 0, 00:24:20.861 "abort_timeout_sec": 1, 00:24:20.861 "ack_timeout": 0, 00:24:20.861 "data_wr_pool_size": 0 00:24:20.861 } 00:24:20.861 }, 00:24:20.861 { 00:24:20.861 "method": "nvmf_create_subsystem", 00:24:20.861 "params": { 00:24:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.861 "allow_any_host": false, 00:24:20.861 "serial_number": "SPDK00000000000001", 00:24:20.861 "model_number": "SPDK bdev Controller", 00:24:20.861 "max_namespaces": 10, 00:24:20.861 "min_cntlid": 1, 00:24:20.861 "max_cntlid": 65519, 00:24:20.861 
"ana_reporting": false 00:24:20.861 } 00:24:20.861 }, 00:24:20.861 { 00:24:20.861 "method": "nvmf_subsystem_add_host", 00:24:20.861 "params": { 00:24:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.861 "host": "nqn.2016-06.io.spdk:host1", 00:24:20.861 "psk": "key0" 00:24:20.861 } 00:24:20.861 }, 00:24:20.861 { 00:24:20.861 "method": "nvmf_subsystem_add_ns", 00:24:20.861 "params": { 00:24:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.861 "namespace": { 00:24:20.861 "nsid": 1, 00:24:20.861 "bdev_name": "malloc0", 00:24:20.861 "nguid": "612B59920C414E3586E2FE0A911CD627", 00:24:20.861 "uuid": "612b5992-0c41-4e35-86e2-fe0a911cd627", 00:24:20.861 "no_auto_visible": false 00:24:20.861 } 00:24:20.861 } 00:24:20.861 }, 00:24:20.861 { 00:24:20.861 "method": "nvmf_subsystem_add_listener", 00:24:20.861 "params": { 00:24:20.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.861 "listen_address": { 00:24:20.861 "trtype": "TCP", 00:24:20.861 "adrfam": "IPv4", 00:24:20.861 "traddr": "10.0.0.2", 00:24:20.861 "trsvcid": "4420" 00:24:20.861 }, 00:24:20.861 "secure_channel": true 00:24:20.861 } 00:24:20.861 } 00:24:20.861 ] 00:24:20.861 } 00:24:20.861 ] 00:24:20.861 }' 00:24:20.861 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:21.119 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:21.119 "subsystems": [ 00:24:21.119 { 00:24:21.119 "subsystem": "keyring", 00:24:21.119 "config": [ 00:24:21.119 { 00:24:21.119 "method": "keyring_file_add_key", 00:24:21.119 "params": { 00:24:21.119 "name": "key0", 00:24:21.119 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:21.119 } 00:24:21.119 } 00:24:21.119 ] 00:24:21.119 }, 00:24:21.119 { 00:24:21.119 "subsystem": "iobuf", 00:24:21.119 "config": [ 00:24:21.119 { 00:24:21.119 "method": "iobuf_set_options", 00:24:21.119 "params": { 00:24:21.119 "small_pool_count": 8192, 00:24:21.119 "large_pool_count": 1024, 00:24:21.119 "small_bufsize": 8192, 00:24:21.119 "large_bufsize": 135168 00:24:21.119 } 00:24:21.119 } 00:24:21.119 ] 00:24:21.119 }, 00:24:21.119 { 00:24:21.119 "subsystem": "sock", 00:24:21.119 "config": [ 00:24:21.119 { 00:24:21.119 "method": "sock_set_default_impl", 00:24:21.119 "params": { 00:24:21.119 "impl_name": "posix" 00:24:21.119 } 00:24:21.119 }, 00:24:21.119 { 00:24:21.119 "method": "sock_impl_set_options", 00:24:21.119 "params": { 00:24:21.119 "impl_name": "ssl", 00:24:21.119 "recv_buf_size": 4096, 00:24:21.119 "send_buf_size": 4096, 00:24:21.119 "enable_recv_pipe": true, 00:24:21.119 "enable_quickack": false, 00:24:21.119 "enable_placement_id": 0, 00:24:21.119 "enable_zerocopy_send_server": true, 00:24:21.119 "enable_zerocopy_send_client": false, 00:24:21.119 "zerocopy_threshold": 0, 00:24:21.119 "tls_version": 0, 00:24:21.119 "enable_ktls": false 00:24:21.119 } 00:24:21.119 }, 00:24:21.119 { 00:24:21.119 "method": "sock_impl_set_options", 00:24:21.119 "params": { 00:24:21.119 "impl_name": "posix", 00:24:21.119 "recv_buf_size": 2097152, 00:24:21.119 "send_buf_size": 2097152, 00:24:21.120 "enable_recv_pipe": true, 00:24:21.120 "enable_quickack": false, 00:24:21.120 "enable_placement_id": 0, 00:24:21.120 "enable_zerocopy_send_server": true, 00:24:21.120 "enable_zerocopy_send_client": false, 00:24:21.120 "zerocopy_threshold": 0, 00:24:21.120 "tls_version": 0, 00:24:21.120 "enable_ktls": false 00:24:21.120 } 00:24:21.120 } 00:24:21.120 ] 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 
"subsystem": "vmd", 00:24:21.120 "config": [] 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "subsystem": "accel", 00:24:21.120 "config": [ 00:24:21.120 { 00:24:21.120 "method": "accel_set_options", 00:24:21.120 "params": { 00:24:21.120 "small_cache_size": 128, 00:24:21.120 "large_cache_size": 16, 00:24:21.120 "task_count": 2048, 00:24:21.120 "sequence_count": 2048, 00:24:21.120 "buf_count": 2048 00:24:21.120 } 00:24:21.120 } 00:24:21.120 ] 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "subsystem": "bdev", 00:24:21.120 "config": [ 00:24:21.120 { 00:24:21.120 "method": "bdev_set_options", 00:24:21.120 "params": { 00:24:21.120 "bdev_io_pool_size": 65535, 00:24:21.120 "bdev_io_cache_size": 256, 00:24:21.120 "bdev_auto_examine": true, 00:24:21.120 "iobuf_small_cache_size": 128, 00:24:21.120 "iobuf_large_cache_size": 16 00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_raid_set_options", 00:24:21.120 "params": { 00:24:21.120 "process_window_size_kb": 1024, 00:24:21.120 "process_max_bandwidth_mb_sec": 0 00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_iscsi_set_options", 00:24:21.120 "params": { 00:24:21.120 "timeout_sec": 30 00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_nvme_set_options", 00:24:21.120 "params": { 00:24:21.120 "action_on_timeout": "none", 00:24:21.120 "timeout_us": 0, 00:24:21.120 "timeout_admin_us": 0, 00:24:21.120 "keep_alive_timeout_ms": 10000, 00:24:21.120 "arbitration_burst": 0, 00:24:21.120 "low_priority_weight": 0, 00:24:21.120 "medium_priority_weight": 0, 00:24:21.120 "high_priority_weight": 0, 00:24:21.120 "nvme_adminq_poll_period_us": 10000, 00:24:21.120 "nvme_ioq_poll_period_us": 0, 00:24:21.120 "io_queue_requests": 512, 00:24:21.120 "delay_cmd_submit": true, 00:24:21.120 "transport_retry_count": 4, 00:24:21.120 "bdev_retry_count": 3, 00:24:21.120 "transport_ack_timeout": 0, 00:24:21.120 "ctrlr_loss_timeout_sec": 0, 00:24:21.120 "reconnect_delay_sec": 0, 00:24:21.120 "fast_io_fail_timeout_sec": 0, 00:24:21.120 "disable_auto_failback": false, 00:24:21.120 "generate_uuids": false, 00:24:21.120 "transport_tos": 0, 00:24:21.120 "nvme_error_stat": false, 00:24:21.120 "rdma_srq_size": 0, 00:24:21.120 "io_path_stat": false, 00:24:21.120 "allow_accel_sequence": false, 00:24:21.120 "rdma_max_cq_size": 0, 00:24:21.120 "rdma_cm_event_timeout_ms": 0, 00:24:21.120 "dhchap_digests": [ 00:24:21.120 "sha256", 00:24:21.120 "sha384", 00:24:21.120 "sha512" 00:24:21.120 ], 00:24:21.120 "dhchap_dhgroups": [ 00:24:21.120 "null", 00:24:21.120 "ffdhe2048", 00:24:21.120 "ffdhe3072", 00:24:21.120 "ffdhe4096", 00:24:21.120 "ffdhe6144", 00:24:21.120 "ffdhe8192" 00:24:21.120 ] 00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_nvme_attach_controller", 00:24:21.120 "params": { 00:24:21.120 "name": "TLSTEST", 00:24:21.120 "trtype": "TCP", 00:24:21.120 "adrfam": "IPv4", 00:24:21.120 "traddr": "10.0.0.2", 00:24:21.120 "trsvcid": "4420", 00:24:21.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.120 "prchk_reftag": false, 00:24:21.120 "prchk_guard": false, 00:24:21.120 "ctrlr_loss_timeout_sec": 0, 00:24:21.120 "reconnect_delay_sec": 0, 00:24:21.120 "fast_io_fail_timeout_sec": 0, 00:24:21.120 "psk": "key0", 00:24:21.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.120 "hdgst": false, 00:24:21.120 "ddgst": false 00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_nvme_set_hotplug", 00:24:21.120 "params": { 00:24:21.120 "period_us": 100000, 00:24:21.120 "enable": false 
00:24:21.120 } 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "method": "bdev_wait_for_examine" 00:24:21.120 } 00:24:21.120 ] 00:24:21.120 }, 00:24:21.120 { 00:24:21.120 "subsystem": "nbd", 00:24:21.120 "config": [] 00:24:21.120 } 00:24:21.120 ] 00:24:21.120 }' 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1573280 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1573280 ']' 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1573280 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.120 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1573280 00:24:21.378 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:21.378 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:21.378 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1573280' 00:24:21.378 killing process with pid 1573280 00:24:21.378 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1573280 00:24:21.378 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.378 00:24:21.378 Latency(us) 00:24:21.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.378 =================================================================================================================== 00:24:21.378 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.378 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1573280 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1572963 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1572963 ']' 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1572963 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572963 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572963' 00:24:21.638 killing process with pid 1572963 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1572963 00:24:21.638 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1572963 00:24:21.897 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:21.897 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:21.897 09:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.897 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:21.897 "subsystems": [ 00:24:21.897 { 00:24:21.897 "subsystem": "keyring", 00:24:21.897 "config": [ 00:24:21.897 { 00:24:21.897 "method": "keyring_file_add_key", 00:24:21.897 "params": { 00:24:21.897 "name": "key0", 00:24:21.897 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:21.897 } 00:24:21.897 } 00:24:21.897 ] 00:24:21.897 }, 00:24:21.897 { 00:24:21.897 "subsystem": "iobuf", 00:24:21.897 "config": [ 00:24:21.897 { 00:24:21.897 "method": "iobuf_set_options", 00:24:21.897 "params": { 00:24:21.897 "small_pool_count": 8192, 00:24:21.897 "large_pool_count": 1024, 00:24:21.897 "small_bufsize": 8192, 00:24:21.897 "large_bufsize": 135168 00:24:21.897 } 00:24:21.897 } 00:24:21.897 ] 00:24:21.897 }, 00:24:21.897 { 00:24:21.897 "subsystem": "sock", 00:24:21.897 "config": [ 00:24:21.897 { 00:24:21.897 "method": "sock_set_default_impl", 00:24:21.897 "params": { 00:24:21.897 "impl_name": "posix" 00:24:21.897 } 00:24:21.897 }, 00:24:21.897 { 00:24:21.897 "method": "sock_impl_set_options", 00:24:21.897 "params": { 00:24:21.897 "impl_name": "ssl", 00:24:21.897 "recv_buf_size": 4096, 00:24:21.897 "send_buf_size": 4096, 00:24:21.897 "enable_recv_pipe": true, 00:24:21.897 "enable_quickack": false, 00:24:21.897 "enable_placement_id": 0, 00:24:21.897 "enable_zerocopy_send_server": true, 00:24:21.897 "enable_zerocopy_send_client": false, 00:24:21.897 "zerocopy_threshold": 0, 00:24:21.897 "tls_version": 0, 00:24:21.897 "enable_ktls": false 00:24:21.897 } 00:24:21.897 }, 00:24:21.897 { 00:24:21.897 "method": "sock_impl_set_options", 00:24:21.897 "params": { 00:24:21.897 "impl_name": "posix", 00:24:21.897 "recv_buf_size": 2097152, 00:24:21.897 "send_buf_size": 2097152, 00:24:21.897 "enable_recv_pipe": true, 00:24:21.897 "enable_quickack": false, 00:24:21.897 "enable_placement_id": 0, 00:24:21.897 "enable_zerocopy_send_server": true, 00:24:21.898 "enable_zerocopy_send_client": false, 00:24:21.898 "zerocopy_threshold": 0, 00:24:21.898 "tls_version": 0, 00:24:21.898 "enable_ktls": false 00:24:21.898 } 00:24:21.898 } 00:24:21.898 ] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "vmd", 00:24:21.898 "config": [] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "accel", 00:24:21.898 "config": [ 00:24:21.898 { 00:24:21.898 "method": "accel_set_options", 00:24:21.898 "params": { 00:24:21.898 "small_cache_size": 128, 00:24:21.898 "large_cache_size": 16, 00:24:21.898 "task_count": 2048, 00:24:21.898 "sequence_count": 2048, 00:24:21.898 "buf_count": 2048 00:24:21.898 } 00:24:21.898 } 00:24:21.898 ] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "bdev", 00:24:21.898 "config": [ 00:24:21.898 { 00:24:21.898 "method": "bdev_set_options", 00:24:21.898 "params": { 00:24:21.898 "bdev_io_pool_size": 65535, 00:24:21.898 "bdev_io_cache_size": 256, 00:24:21.898 "bdev_auto_examine": true, 00:24:21.898 "iobuf_small_cache_size": 128, 00:24:21.898 "iobuf_large_cache_size": 16 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_raid_set_options", 00:24:21.898 "params": { 00:24:21.898 "process_window_size_kb": 1024, 00:24:21.898 "process_max_bandwidth_mb_sec": 0 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_iscsi_set_options", 00:24:21.898 "params": { 00:24:21.898 "timeout_sec": 30 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_nvme_set_options", 00:24:21.898 
"params": { 00:24:21.898 "action_on_timeout": "none", 00:24:21.898 "timeout_us": 0, 00:24:21.898 "timeout_admin_us": 0, 00:24:21.898 "keep_alive_timeout_ms": 10000, 00:24:21.898 "arbitration_burst": 0, 00:24:21.898 "low_priority_weight": 0, 00:24:21.898 "medium_priority_weight": 0, 00:24:21.898 "high_priority_weight": 0, 00:24:21.898 "nvme_adminq_poll_period_us": 10000, 00:24:21.898 "nvme_ioq_poll_period_us": 0, 00:24:21.898 "io_queue_requests": 0, 00:24:21.898 "delay_cmd_submit": true, 00:24:21.898 "transport_retry_count": 4, 00:24:21.898 "bdev_retry_count": 3, 00:24:21.898 "transport_ack_timeout": 0, 00:24:21.898 "ctrlr_loss_timeout_sec": 0, 00:24:21.898 "reconnect_delay_sec": 0, 00:24:21.898 "fast_io_fail_timeout_sec": 0, 00:24:21.898 "disable_auto_failback": false, 00:24:21.898 "generate_uuids": false, 00:24:21.898 "transport_tos": 0, 00:24:21.898 "nvme_error_stat": false, 00:24:21.898 "rdma_srq_size": 0, 00:24:21.898 "io_path_stat": false, 00:24:21.898 "allow_accel_sequence": false, 00:24:21.898 "rdma_max_cq_size": 0, 00:24:21.898 "rdma_cm_event_timeout_ms": 0, 00:24:21.898 "dhchap_digests": [ 00:24:21.898 "sha256", 00:24:21.898 "sha384", 00:24:21.898 "sha512" 00:24:21.898 ], 00:24:21.898 "dhchap_dhgroups": [ 00:24:21.898 "null", 00:24:21.898 "ffdhe2048", 00:24:21.898 "ffdhe3072", 00:24:21.898 "ffdhe4096", 00:24:21.898 "ffdhe6144", 00:24:21.898 "ffdhe8192" 00:24:21.898 ] 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_nvme_set_hotplug", 00:24:21.898 "params": { 00:24:21.898 "period_us": 100000, 00:24:21.898 "enable": false 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_malloc_create", 00:24:21.898 "params": { 00:24:21.898 "name": "malloc0", 00:24:21.898 "num_blocks": 8192, 00:24:21.898 "block_size": 4096, 00:24:21.898 "physical_block_size": 4096, 00:24:21.898 "uuid": "612b5992-0c41-4e35-86e2-fe0a911cd627", 00:24:21.898 "optimal_io_boundary": 0, 00:24:21.898 "md_size": 0, 00:24:21.898 "dif_type": 0, 00:24:21.898 "dif_is_head_of_md": false, 00:24:21.898 "dif_pi_format": 0 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "bdev_wait_for_examine" 00:24:21.898 } 00:24:21.898 ] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "nbd", 00:24:21.898 "config": [] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "scheduler", 00:24:21.898 "config": [ 00:24:21.898 { 00:24:21.898 "method": "framework_set_scheduler", 00:24:21.898 "params": { 00:24:21.898 "name": "static" 00:24:21.898 } 00:24:21.898 } 00:24:21.898 ] 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "subsystem": "nvmf", 00:24:21.898 "config": [ 00:24:21.898 { 00:24:21.898 "method": "nvmf_set_config", 00:24:21.898 "params": { 00:24:21.898 "discovery_filter": "match_any", 00:24:21.898 "admin_cmd_passthru": { 00:24:21.898 "identify_ctrlr": false 00:24:21.898 }, 00:24:21.898 "dhchap_digests": [ 00:24:21.898 "sha256", 00:24:21.898 "sha384", 00:24:21.898 "sha512" 00:24:21.898 ], 00:24:21.898 "dhchap_dhgroups": [ 00:24:21.898 "null", 00:24:21.898 "ffdhe2048", 00:24:21.898 "ffdhe3072", 00:24:21.898 "ffdhe4096", 00:24:21.898 "ffdhe6144", 00:24:21.898 "ffdhe8192" 00:24:21.898 ] 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "nvmf_set_max_subsystems", 00:24:21.898 "params": { 00:24:21.898 "max_subsystems": 1024 00:24:21.898 } 00:24:21.898 }, 00:24:21.898 { 00:24:21.898 "method": "nvmf_set_crdt", 00:24:21.898 "params": { 00:24:21.898 "crdt1": 0, 00:24:21.898 "crdt2": 0, 00:24:21.898 "crdt3": 0 00:24:21.899 } 00:24:21.899 }, 00:24:21.899 { 
00:24:21.899 "method": "nvmf_create_transport", 00:24:21.899 "params": { 00:24:21.899 "trtype": "TCP", 00:24:21.899 "max_queue_depth": 128, 00:24:21.899 "max_io_qpairs_per_ctrlr": 127, 00:24:21.899 "in_capsule_data_size": 4096, 00:24:21.899 "max_io_size": 131072, 00:24:21.899 "io_unit_size": 131072, 00:24:21.899 "max_aq_depth": 128, 00:24:21.899 "num_shared_buffers": 511, 00:24:21.899 "buf_cache_size": 4294967295, 00:24:21.899 "dif_insert_or_strip": false, 00:24:21.899 "zcopy": false, 00:24:21.899 "c2h_success": false, 00:24:21.899 "sock_priority": 0, 00:24:21.899 "abort_timeout_sec": 1, 00:24:21.899 "ack_timeout": 0, 00:24:21.899 "data_wr_pool_size": 0 00:24:21.899 } 00:24:21.899 }, 00:24:21.899 { 00:24:21.899 "method": "nvmf_create_subsystem", 00:24:21.899 "params": { 00:24:21.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.899 "allow_any_host": false, 00:24:21.899 "serial_number": "SPDK00000000000001", 00:24:21.899 "model_number": "SPDK bdev Controller", 00:24:21.899 "max_namespaces": 10, 00:24:21.899 "min_cntlid": 1, 00:24:21.899 "max_cntlid": 65519, 00:24:21.899 "ana_reporting": false 00:24:21.899 } 00:24:21.899 }, 00:24:21.899 { 00:24:21.899 "method": "nvmf_subsystem_add_host", 00:24:21.899 "params": { 00:24:21.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.899 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.899 "psk": "key0" 00:24:21.899 } 00:24:21.899 }, 00:24:21.899 { 00:24:21.899 "method": "nvmf_subsystem_add_ns", 00:24:21.899 "params": { 00:24:21.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.899 "namespace": { 00:24:21.899 "nsid": 1, 00:24:21.899 "bdev_name": "malloc0", 00:24:21.899 "nguid": "612B59920C414E3586E2FE0A911CD627", 00:24:21.899 "uuid": "612b5992-0c41-4e35-86e2-fe0a911cd627", 00:24:21.899 "no_auto_visible": false 00:24:21.899 } 00:24:21.899 } 00:24:21.899 }, 00:24:21.899 { 00:24:21.899 "method": "nvmf_subsystem_add_listener", 00:24:21.899 "params": { 00:24:21.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.899 "listen_address": { 00:24:21.899 "trtype": "TCP", 00:24:21.899 "adrfam": "IPv4", 00:24:21.899 "traddr": "10.0.0.2", 00:24:21.899 "trsvcid": "4420" 00:24:21.899 }, 00:24:21.899 "secure_channel": true 00:24:21.899 } 00:24:21.899 } 00:24:21.899 ] 00:24:21.899 } 00:24:21.899 ] 00:24:21.899 }' 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1573671 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1573671 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1573671 ']' 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.899 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.157 [2024-10-07 09:45:16.751280] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:22.157 [2024-10-07 09:45:16.751382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.157 [2024-10-07 09:45:16.832801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.157 [2024-10-07 09:45:16.956996] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.157 [2024-10-07 09:45:16.957069] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.157 [2024-10-07 09:45:16.957086] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.157 [2024-10-07 09:45:16.957099] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.157 [2024-10-07 09:45:16.957110] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.157 [2024-10-07 09:45:16.957914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.726 [2024-10-07 09:45:17.290972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.726 [2024-10-07 09:45:17.324058] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.726 [2024-10-07 09:45:17.324544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1573822 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1573822 /var/tmp/bdevperf.sock 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1573822 ']' 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.293 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:23.293 "subsystems": [ 00:24:23.293 { 00:24:23.293 
"subsystem": "keyring", 00:24:23.293 "config": [ 00:24:23.293 { 00:24:23.293 "method": "keyring_file_add_key", 00:24:23.293 "params": { 00:24:23.293 "name": "key0", 00:24:23.293 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:23.293 } 00:24:23.293 } 00:24:23.293 ] 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "subsystem": "iobuf", 00:24:23.293 "config": [ 00:24:23.293 { 00:24:23.293 "method": "iobuf_set_options", 00:24:23.293 "params": { 00:24:23.293 "small_pool_count": 8192, 00:24:23.293 "large_pool_count": 1024, 00:24:23.293 "small_bufsize": 8192, 00:24:23.293 "large_bufsize": 135168 00:24:23.293 } 00:24:23.293 } 00:24:23.293 ] 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "subsystem": "sock", 00:24:23.293 "config": [ 00:24:23.293 { 00:24:23.293 "method": "sock_set_default_impl", 00:24:23.293 "params": { 00:24:23.293 "impl_name": "posix" 00:24:23.293 } 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "method": "sock_impl_set_options", 00:24:23.293 "params": { 00:24:23.293 "impl_name": "ssl", 00:24:23.293 "recv_buf_size": 4096, 00:24:23.293 "send_buf_size": 4096, 00:24:23.293 "enable_recv_pipe": true, 00:24:23.293 "enable_quickack": false, 00:24:23.293 "enable_placement_id": 0, 00:24:23.293 "enable_zerocopy_send_server": true, 00:24:23.293 "enable_zerocopy_send_client": false, 00:24:23.293 "zerocopy_threshold": 0, 00:24:23.293 "tls_version": 0, 00:24:23.293 "enable_ktls": false 00:24:23.293 } 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "method": "sock_impl_set_options", 00:24:23.293 "params": { 00:24:23.293 "impl_name": "posix", 00:24:23.293 "recv_buf_size": 2097152, 00:24:23.293 "send_buf_size": 2097152, 00:24:23.293 "enable_recv_pipe": true, 00:24:23.293 "enable_quickack": false, 00:24:23.293 "enable_placement_id": 0, 00:24:23.293 "enable_zerocopy_send_server": true, 00:24:23.293 "enable_zerocopy_send_client": false, 00:24:23.293 "zerocopy_threshold": 0, 00:24:23.293 "tls_version": 0, 00:24:23.293 "enable_ktls": false 00:24:23.293 } 00:24:23.293 } 00:24:23.293 ] 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "subsystem": "vmd", 00:24:23.293 "config": [] 00:24:23.293 }, 00:24:23.293 { 00:24:23.293 "subsystem": "accel", 00:24:23.293 "config": [ 00:24:23.293 { 00:24:23.293 "method": "accel_set_options", 00:24:23.293 "params": { 00:24:23.293 "small_cache_size": 128, 00:24:23.293 "large_cache_size": 16, 00:24:23.293 "task_count": 2048, 00:24:23.294 "sequence_count": 2048, 00:24:23.294 "buf_count": 2048 00:24:23.294 } 00:24:23.294 } 00:24:23.294 ] 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "subsystem": "bdev", 00:24:23.294 "config": [ 00:24:23.294 { 00:24:23.294 "method": "bdev_set_options", 00:24:23.294 "params": { 00:24:23.294 "bdev_io_pool_size": 65535, 00:24:23.294 "bdev_io_cache_size": 256, 00:24:23.294 "bdev_auto_examine": true, 00:24:23.294 "iobuf_small_cache_size": 128, 00:24:23.294 "iobuf_large_cache_size": 16 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_raid_set_options", 00:24:23.294 "params": { 00:24:23.294 "process_window_size_kb": 1024, 00:24:23.294 "process_max_bandwidth_mb_sec": 0 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_iscsi_set_options", 00:24:23.294 "params": { 00:24:23.294 "timeout_sec": 30 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_nvme_set_options", 00:24:23.294 "params": { 00:24:23.294 "action_on_timeout": "none", 00:24:23.294 "timeout_us": 0, 00:24:23.294 "timeout_admin_us": 0, 00:24:23.294 "keep_alive_timeout_ms": 10000, 00:24:23.294 "arbitration_burst": 0, 00:24:23.294 "low_priority_weight": 0, 
00:24:23.294 "medium_priority_weight": 0, 00:24:23.294 "high_priority_weight": 0, 00:24:23.294 "nvme_adminq_poll_period_us": 10000, 00:24:23.294 "nvme_ioq_poll_period_us": 0, 00:24:23.294 "io_queue_requests": 512, 00:24:23.294 "delay_cmd_submit": true, 00:24:23.294 "transport_retry_count": 4, 00:24:23.294 "bdev_retry_count": 3, 00:24:23.294 "transport_ack_timeout": 0, 00:24:23.294 "ctrlr_loss_timeout_sec": 0, 00:24:23.294 "reconnect_delay_sec": 0, 00:24:23.294 "fast_io_fail_timeout_sec": 0, 00:24:23.294 "disable_auto_failback": false, 00:24:23.294 "generate_uuids": false, 00:24:23.294 "transport_tos": 0, 00:24:23.294 "nvme_error_stat": false, 00:24:23.294 "rdma_srq_size": 0, 00:24:23.294 "io_path_stat": false, 00:24:23.294 "allow_accel_sequence": false, 00:24:23.294 "rdma_max_cq_size": 0, 00:24:23.294 "rdma_cm_event_timeout_ms": 0, 00:24:23.294 "dhchap_digests": [ 00:24:23.294 "sha256", 00:24:23.294 "sha384", 00:24:23.294 "sha512" 00:24:23.294 ], 00:24:23.294 "dhchap_dhgroups": [ 00:24:23.294 "null", 00:24:23.294 "ffdhe2048", 00:24:23.294 "ffdhe3072", 00:24:23.294 "ffdhe4096", 00:24:23.294 "ffdhe6144", 00:24:23.294 "ffdhe8192" 00:24:23.294 ] 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_nvme_attach_controller", 00:24:23.294 "params": { 00:24:23.294 "name": "TLSTEST", 00:24:23.294 "trtype": "TCP", 00:24:23.294 "adrfam": "IPv4", 00:24:23.294 "traddr": "10.0.0.2", 00:24:23.294 "trsvcid": "4420", 00:24:23.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.294 "prchk_reftag": false, 00:24:23.294 "prchk_guard": false, 00:24:23.294 "ctrlr_loss_timeout_sec": 0, 00:24:23.294 "reconnect_delay_sec": 0, 00:24:23.294 "fast_io_fail_timeout_sec": 0, 00:24:23.294 "psk": "key0", 00:24:23.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.294 "hdgst": false, 00:24:23.294 "ddgst": false 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_nvme_set_hotplug", 00:24:23.294 "params": { 00:24:23.294 "period_us": 100000, 00:24:23.294 "enable": false 00:24:23.294 } 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "method": "bdev_wait_for_examine" 00:24:23.294 } 00:24:23.294 ] 00:24:23.294 }, 00:24:23.294 { 00:24:23.294 "subsystem": "nbd", 00:24:23.294 "config": [] 00:24:23.294 } 00:24:23.294 ] 00:24:23.294 }' 00:24:23.294 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.294 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.294 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.294 [2024-10-07 09:45:17.933294] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:24:23.294 [2024-10-07 09:45:17.933407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573822 ] 00:24:23.294 [2024-10-07 09:45:18.027792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.551 [2024-10-07 09:45:18.154745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.551 [2024-10-07 09:45:18.344436] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.808 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.808 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:23.808 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:24.065 Running I/O for 10 seconds... 00:24:34.267 3600.00 IOPS, 14.06 MiB/s 3583.50 IOPS, 14.00 MiB/s 3603.00 IOPS, 14.07 MiB/s 3618.50 IOPS, 14.13 MiB/s 3616.80 IOPS, 14.13 MiB/s 3623.50 IOPS, 14.15 MiB/s 3639.29 IOPS, 14.22 MiB/s 3638.00 IOPS, 14.21 MiB/s 3639.33 IOPS, 14.22 MiB/s 3640.50 IOPS, 14.22 MiB/s 00:24:34.267 Latency(us) 00:24:34.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.267 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.267 Verification LBA range: start 0x0 length 0x2000 00:24:34.267 TLSTESTn1 : 10.02 3644.97 14.24 0.00 0.00 35057.53 6893.42 29709.65 00:24:34.267 =================================================================================================================== 00:24:34.267 Total : 3644.97 14.24 0.00 0.00 35057.53 6893.42 29709.65 00:24:34.267 { 00:24:34.267 "results": [ 00:24:34.267 { 00:24:34.267 "job": "TLSTESTn1", 00:24:34.267 "core_mask": "0x4", 00:24:34.267 "workload": "verify", 00:24:34.267 "status": "finished", 00:24:34.267 "verify_range": { 00:24:34.267 "start": 0, 00:24:34.267 "length": 8192 00:24:34.267 }, 00:24:34.267 "queue_depth": 128, 00:24:34.267 "io_size": 4096, 00:24:34.267 "runtime": 10.022039, 00:24:34.267 "iops": 3644.9668575426617, 00:24:34.267 "mibps": 14.238151787276022, 00:24:34.267 "io_failed": 0, 00:24:34.267 "io_timeout": 0, 00:24:34.267 "avg_latency_us": 35057.52541931036, 00:24:34.267 "min_latency_us": 6893.416296296296, 00:24:34.267 "max_latency_us": 29709.653333333332 00:24:34.267 } 00:24:34.267 ], 00:24:34.267 "core_count": 1 00:24:34.267 } 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1573822 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1573822 ']' 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1573822 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1573822 00:24:34.267 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1573822' 00:24:34.267 killing process with pid 1573822 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1573822 00:24:34.267 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.267 00:24:34.267 Latency(us) 00:24:34.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.267 =================================================================================================================== 00:24:34.267 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.267 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1573822 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1573671 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1573671 ']' 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1573671 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.267 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1573671 00:24:34.526 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:34.526 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:34.526 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1573671' 00:24:34.526 killing process with pid 1573671 00:24:34.526 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1573671 00:24:34.526 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1573671 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1575146 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1575146 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1575146 ']' 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.786 09:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.786 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.045 [2024-10-07 09:45:29.613406] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:35.045 [2024-10-07 09:45:29.613600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.045 [2024-10-07 09:45:29.738154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.045 [2024-10-07 09:45:29.858375] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.045 [2024-10-07 09:45:29.858445] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.045 [2024-10-07 09:45:29.858462] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.045 [2024-10-07 09:45:29.858476] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.045 [2024-10-07 09:45:29.858488] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.045 [2024-10-07 09:45:29.859237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zxUnf0GL3m 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zxUnf0GL3m 00:24:35.303 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:35.869 [2024-10-07 09:45:30.456693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.869 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:36.127 09:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:36.692 [2024-10-07 09:45:31.431320] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:24:36.692 [2024-10-07 09:45:31.431597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.692 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:37.258 malloc0 00:24:37.258 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:37.824 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:38.082 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1575574 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1575574 /var/tmp/bdevperf.sock 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1575574 ']' 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.647 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.906 [2024-10-07 09:45:33.548091] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
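The setup_nvmf_tgt helper exercised above amounts to the following RPC sequence: a TCP transport, a subsystem, a TLS-enabled listener (-k), a RAM-backed namespace, and a PSK bound to the allowed host. Collected in one place as a sketch, run from an SPDK repository root against the default RPC socket and using the same NQNs, address, and key file as this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0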
00:24:38.906 [2024-10-07 09:45:33.548265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575574 ] 00:24:38.906 [2024-10-07 09:45:33.651243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.164 [2024-10-07 09:45:33.782235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.423 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.423 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:39.423 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:39.680 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:39.938 [2024-10-07 09:45:34.639024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.939 nvme0n1 00:24:39.939 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:40.197 Running I/O for 1 seconds... 00:24:41.572 2995.00 IOPS, 11.70 MiB/s 00:24:41.572 Latency(us) 00:24:41.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.572 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:41.572 Verification LBA range: start 0x0 length 0x2000 00:24:41.572 nvme0n1 : 1.02 3048.14 11.91 0.00 0.00 41525.22 6407.96 37476.88 00:24:41.572 =================================================================================================================== 00:24:41.572 Total : 3048.14 11.91 0.00 0.00 41525.22 6407.96 37476.88 00:24:41.572 { 00:24:41.572 "results": [ 00:24:41.572 { 00:24:41.572 "job": "nvme0n1", 00:24:41.572 "core_mask": "0x2", 00:24:41.572 "workload": "verify", 00:24:41.572 "status": "finished", 00:24:41.572 "verify_range": { 00:24:41.572 "start": 0, 00:24:41.572 "length": 8192 00:24:41.572 }, 00:24:41.572 "queue_depth": 128, 00:24:41.572 "io_size": 4096, 00:24:41.572 "runtime": 1.024887, 00:24:41.572 "iops": 3048.1409169986546, 00:24:41.572 "mibps": 11.906800457025994, 00:24:41.572 "io_failed": 0, 00:24:41.572 "io_timeout": 0, 00:24:41.572 "avg_latency_us": 41525.223525394795, 00:24:41.572 "min_latency_us": 6407.964444444445, 00:24:41.572 "max_latency_us": 37476.88296296296 00:24:41.572 } 00:24:41.572 ], 00:24:41.572 "core_count": 1 00:24:41.572 } 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1575574 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1575574 ']' 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1575574 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
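The initiator side of the run above is bdevperf started in wait-for-RPC mode on its own socket; the same PSK is registered there before the controller is attached and perform_tests is driven. A sketch, reusing $SPDK, $RPC and $KEY from the previous snippet:

    BPERF_SOCK=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -z -r "$BPERF_SOCK" -q 128 -o 4k -w verify -t 1 &   # -z: idle until configured over RPC
    BDEVPERF_PID=$!

    $RPC -s "$BPERF_SOCK" keyring_file_add_key key0 "$KEY"      # the initiator needs the PSK in its own keyring
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests   # emits the Latency table and JSON results shown above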
00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575574 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575574' 00:24:41.572 killing process with pid 1575574 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1575574 00:24:41.572 Received shutdown signal, test time was about 1.000000 seconds 00:24:41.572 00:24:41.572 Latency(us) 00:24:41.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.572 =================================================================================================================== 00:24:41.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1575574 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1575146 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1575146 ']' 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1575146 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.572 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575146 00:24:41.830 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.831 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.831 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575146' 00:24:41.831 killing process with pid 1575146 00:24:41.831 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1575146 00:24:41.831 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1575146 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1575979 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1575979 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1575979 ']' 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
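Each stage above tears its apps down with the killprocess helper and then restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace used by this run. A sketch of that kill-and-restart pattern (kill_spdk_proc is a stand-in, not the framework's killprocess):

    kill_spdk_proc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")           # SPDK apps report as reactor_<core>; the real helper also special-cases sudo
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid" 2>/dev/null
    }

    kill_spdk_proc "$BDEVPERF_PID"

    # restart the target inside the test namespace, shm id 0, full tracepoint mask
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    NVMF_PID=$!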
00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.089 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.089 [2024-10-07 09:45:36.811812] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:42.089 [2024-10-07 09:45:36.811957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.089 [2024-10-07 09:45:36.893558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.348 [2024-10-07 09:45:37.010865] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.348 [2024-10-07 09:45:37.010940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.348 [2024-10-07 09:45:37.010957] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.348 [2024-10-07 09:45:37.010970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.348 [2024-10-07 09:45:37.010982] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
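Before any RPC is issued, the scripts block on the app's UNIX-domain RPC socket (the repeated "Waiting for process to start up and listen on ..." lines). A hedged sketch of that kind of wait loop; the real waitforlisten helper lives in autotest_common.sh and may differ:

    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                      # app died before it ever listened
            "$RPC" -s "$sock" rpc_get_methods &>/dev/null && return 0   # socket answers RPCs: ready
            sleep 0.5
        done
        return 1
    }

    wait_for_rpc_sock "$NVMF_PID" /var/tmp/spdk.sock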
00:24:42.348 [2024-10-07 09:45:37.011683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.348 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.606 [2024-10-07 09:45:37.171142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.606 malloc0 00:24:42.606 [2024-10-07 09:45:37.214251] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.606 [2024-10-07 09:45:37.214531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1576004 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1576004 /var/tmp/bdevperf.sock 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1576004 ']' 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.606 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.606 [2024-10-07 09:45:37.294178] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
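Each app start also installs a trap so the shared-memory trace capture and teardown still happen if the test aborts; the trap line is visible in the log. The same pattern, sketched with the framework helpers it names:

    cleanup_on_exit() {
        process_shm --id "$NVMF_APP_SHM_ID" || :   # framework helper; '|| :' keeps cleanup going even if it fails
        nvmftestfini                               # framework teardown; later in the log it removes nvme modules and restores iptables
    }
    trap 'cleanup_on_exit' SIGINT SIGTERM EXIT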
00:24:42.606 [2024-10-07 09:45:37.294253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576004 ] 00:24:42.606 [2024-10-07 09:45:37.361047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.864 [2024-10-07 09:45:37.483147] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.864 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.864 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:42.864 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zxUnf0GL3m 00:24:43.122 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:43.730 [2024-10-07 09:45:38.354769] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.730 nvme0n1 00:24:43.730 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.987 Running I/O for 1 seconds... 00:24:44.921 3065.00 IOPS, 11.97 MiB/s 00:24:44.921 Latency(us) 00:24:44.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.921 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.921 Verification LBA range: start 0x0 length 0x2000 00:24:44.921 nvme0n1 : 1.02 3120.63 12.19 0.00 0.00 40581.03 6990.51 29709.65 00:24:44.921 =================================================================================================================== 00:24:44.921 Total : 3120.63 12.19 0.00 0.00 40581.03 6990.51 29709.65 00:24:44.921 { 00:24:44.921 "results": [ 00:24:44.921 { 00:24:44.921 "job": "nvme0n1", 00:24:44.921 "core_mask": "0x2", 00:24:44.921 "workload": "verify", 00:24:44.921 "status": "finished", 00:24:44.921 "verify_range": { 00:24:44.921 "start": 0, 00:24:44.921 "length": 8192 00:24:44.921 }, 00:24:44.921 "queue_depth": 128, 00:24:44.921 "io_size": 4096, 00:24:44.921 "runtime": 1.023512, 00:24:44.921 "iops": 3120.6277991855495, 00:24:44.921 "mibps": 12.189952340568553, 00:24:44.921 "io_failed": 0, 00:24:44.921 "io_timeout": 0, 00:24:44.921 "avg_latency_us": 40581.03141909599, 00:24:44.921 "min_latency_us": 6990.506666666667, 00:24:44.921 "max_latency_us": 29709.653333333332 00:24:44.921 } 00:24:44.921 ], 00:24:44.921 "core_count": 1 00:24:44.921 } 00:24:44.921 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:44.921 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.921 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.921 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.921 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:44.921 "subsystems": [ 
00:24:44.921 { 00:24:44.921 "subsystem": "keyring", 00:24:44.921 "config": [ 00:24:44.921 { 00:24:44.921 "method": "keyring_file_add_key", 00:24:44.921 "params": { 00:24:44.921 "name": "key0", 00:24:44.921 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:44.921 } 00:24:44.921 } 00:24:44.921 ] 00:24:44.921 }, 00:24:44.921 { 00:24:44.921 "subsystem": "iobuf", 00:24:44.921 "config": [ 00:24:44.921 { 00:24:44.921 "method": "iobuf_set_options", 00:24:44.921 "params": { 00:24:44.921 "small_pool_count": 8192, 00:24:44.921 "large_pool_count": 1024, 00:24:44.921 "small_bufsize": 8192, 00:24:44.921 "large_bufsize": 135168 00:24:44.921 } 00:24:44.921 } 00:24:44.921 ] 00:24:44.921 }, 00:24:44.921 { 00:24:44.921 "subsystem": "sock", 00:24:44.921 "config": [ 00:24:44.921 { 00:24:44.921 "method": "sock_set_default_impl", 00:24:44.921 "params": { 00:24:44.921 "impl_name": "posix" 00:24:44.921 } 00:24:44.921 }, 00:24:44.921 { 00:24:44.921 "method": "sock_impl_set_options", 00:24:44.921 "params": { 00:24:44.921 "impl_name": "ssl", 00:24:44.921 "recv_buf_size": 4096, 00:24:44.921 "send_buf_size": 4096, 00:24:44.921 "enable_recv_pipe": true, 00:24:44.921 "enable_quickack": false, 00:24:44.921 "enable_placement_id": 0, 00:24:44.921 "enable_zerocopy_send_server": true, 00:24:44.921 "enable_zerocopy_send_client": false, 00:24:44.921 "zerocopy_threshold": 0, 00:24:44.921 "tls_version": 0, 00:24:44.921 "enable_ktls": false 00:24:44.921 } 00:24:44.921 }, 00:24:44.921 { 00:24:44.921 "method": "sock_impl_set_options", 00:24:44.921 "params": { 00:24:44.921 "impl_name": "posix", 00:24:44.921 "recv_buf_size": 2097152, 00:24:44.921 "send_buf_size": 2097152, 00:24:44.921 "enable_recv_pipe": true, 00:24:44.921 "enable_quickack": false, 00:24:44.921 "enable_placement_id": 0, 00:24:44.921 "enable_zerocopy_send_server": true, 00:24:44.921 "enable_zerocopy_send_client": false, 00:24:44.921 "zerocopy_threshold": 0, 00:24:44.921 "tls_version": 0, 00:24:44.921 "enable_ktls": false 00:24:44.921 } 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "vmd", 00:24:44.922 "config": [] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "accel", 00:24:44.922 "config": [ 00:24:44.922 { 00:24:44.922 "method": "accel_set_options", 00:24:44.922 "params": { 00:24:44.922 "small_cache_size": 128, 00:24:44.922 "large_cache_size": 16, 00:24:44.922 "task_count": 2048, 00:24:44.922 "sequence_count": 2048, 00:24:44.922 "buf_count": 2048 00:24:44.922 } 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "bdev", 00:24:44.922 "config": [ 00:24:44.922 { 00:24:44.922 "method": "bdev_set_options", 00:24:44.922 "params": { 00:24:44.922 "bdev_io_pool_size": 65535, 00:24:44.922 "bdev_io_cache_size": 256, 00:24:44.922 "bdev_auto_examine": true, 00:24:44.922 "iobuf_small_cache_size": 128, 00:24:44.922 "iobuf_large_cache_size": 16 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_raid_set_options", 00:24:44.922 "params": { 00:24:44.922 "process_window_size_kb": 1024, 00:24:44.922 "process_max_bandwidth_mb_sec": 0 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_iscsi_set_options", 00:24:44.922 "params": { 00:24:44.922 "timeout_sec": 30 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_nvme_set_options", 00:24:44.922 "params": { 00:24:44.922 "action_on_timeout": "none", 00:24:44.922 "timeout_us": 0, 00:24:44.922 "timeout_admin_us": 0, 00:24:44.922 "keep_alive_timeout_ms": 10000, 00:24:44.922 "arbitration_burst": 0, 
00:24:44.922 "low_priority_weight": 0, 00:24:44.922 "medium_priority_weight": 0, 00:24:44.922 "high_priority_weight": 0, 00:24:44.922 "nvme_adminq_poll_period_us": 10000, 00:24:44.922 "nvme_ioq_poll_period_us": 0, 00:24:44.922 "io_queue_requests": 0, 00:24:44.922 "delay_cmd_submit": true, 00:24:44.922 "transport_retry_count": 4, 00:24:44.922 "bdev_retry_count": 3, 00:24:44.922 "transport_ack_timeout": 0, 00:24:44.922 "ctrlr_loss_timeout_sec": 0, 00:24:44.922 "reconnect_delay_sec": 0, 00:24:44.922 "fast_io_fail_timeout_sec": 0, 00:24:44.922 "disable_auto_failback": false, 00:24:44.922 "generate_uuids": false, 00:24:44.922 "transport_tos": 0, 00:24:44.922 "nvme_error_stat": false, 00:24:44.922 "rdma_srq_size": 0, 00:24:44.922 "io_path_stat": false, 00:24:44.922 "allow_accel_sequence": false, 00:24:44.922 "rdma_max_cq_size": 0, 00:24:44.922 "rdma_cm_event_timeout_ms": 0, 00:24:44.922 "dhchap_digests": [ 00:24:44.922 "sha256", 00:24:44.922 "sha384", 00:24:44.922 "sha512" 00:24:44.922 ], 00:24:44.922 "dhchap_dhgroups": [ 00:24:44.922 "null", 00:24:44.922 "ffdhe2048", 00:24:44.922 "ffdhe3072", 00:24:44.922 "ffdhe4096", 00:24:44.922 "ffdhe6144", 00:24:44.922 "ffdhe8192" 00:24:44.922 ] 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_nvme_set_hotplug", 00:24:44.922 "params": { 00:24:44.922 "period_us": 100000, 00:24:44.922 "enable": false 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_malloc_create", 00:24:44.922 "params": { 00:24:44.922 "name": "malloc0", 00:24:44.922 "num_blocks": 8192, 00:24:44.922 "block_size": 4096, 00:24:44.922 "physical_block_size": 4096, 00:24:44.922 "uuid": "c0ca1a1f-4f4f-432f-a3b1-dc357e0cfc1c", 00:24:44.922 "optimal_io_boundary": 0, 00:24:44.922 "md_size": 0, 00:24:44.922 "dif_type": 0, 00:24:44.922 "dif_is_head_of_md": false, 00:24:44.922 "dif_pi_format": 0 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "bdev_wait_for_examine" 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "nbd", 00:24:44.922 "config": [] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "scheduler", 00:24:44.922 "config": [ 00:24:44.922 { 00:24:44.922 "method": "framework_set_scheduler", 00:24:44.922 "params": { 00:24:44.922 "name": "static" 00:24:44.922 } 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "subsystem": "nvmf", 00:24:44.922 "config": [ 00:24:44.922 { 00:24:44.922 "method": "nvmf_set_config", 00:24:44.922 "params": { 00:24:44.922 "discovery_filter": "match_any", 00:24:44.922 "admin_cmd_passthru": { 00:24:44.922 "identify_ctrlr": false 00:24:44.922 }, 00:24:44.922 "dhchap_digests": [ 00:24:44.922 "sha256", 00:24:44.922 "sha384", 00:24:44.922 "sha512" 00:24:44.922 ], 00:24:44.922 "dhchap_dhgroups": [ 00:24:44.922 "null", 00:24:44.922 "ffdhe2048", 00:24:44.922 "ffdhe3072", 00:24:44.922 "ffdhe4096", 00:24:44.922 "ffdhe6144", 00:24:44.922 "ffdhe8192" 00:24:44.922 ] 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_set_max_subsystems", 00:24:44.922 "params": { 00:24:44.922 "max_subsystems": 1024 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_set_crdt", 00:24:44.922 "params": { 00:24:44.922 "crdt1": 0, 00:24:44.922 "crdt2": 0, 00:24:44.922 "crdt3": 0 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_create_transport", 00:24:44.922 "params": { 00:24:44.922 "trtype": "TCP", 00:24:44.922 "max_queue_depth": 128, 00:24:44.922 "max_io_qpairs_per_ctrlr": 127, 00:24:44.922 
"in_capsule_data_size": 4096, 00:24:44.922 "max_io_size": 131072, 00:24:44.922 "io_unit_size": 131072, 00:24:44.922 "max_aq_depth": 128, 00:24:44.922 "num_shared_buffers": 511, 00:24:44.922 "buf_cache_size": 4294967295, 00:24:44.922 "dif_insert_or_strip": false, 00:24:44.922 "zcopy": false, 00:24:44.922 "c2h_success": false, 00:24:44.922 "sock_priority": 0, 00:24:44.922 "abort_timeout_sec": 1, 00:24:44.922 "ack_timeout": 0, 00:24:44.922 "data_wr_pool_size": 0 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_create_subsystem", 00:24:44.922 "params": { 00:24:44.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.922 "allow_any_host": false, 00:24:44.922 "serial_number": "00000000000000000000", 00:24:44.922 "model_number": "SPDK bdev Controller", 00:24:44.922 "max_namespaces": 32, 00:24:44.922 "min_cntlid": 1, 00:24:44.922 "max_cntlid": 65519, 00:24:44.922 "ana_reporting": false 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_subsystem_add_host", 00:24:44.922 "params": { 00:24:44.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.922 "host": "nqn.2016-06.io.spdk:host1", 00:24:44.922 "psk": "key0" 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_subsystem_add_ns", 00:24:44.922 "params": { 00:24:44.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.922 "namespace": { 00:24:44.922 "nsid": 1, 00:24:44.922 "bdev_name": "malloc0", 00:24:44.922 "nguid": "C0CA1A1F4F4F432FA3B1DC357E0CFC1C", 00:24:44.922 "uuid": "c0ca1a1f-4f4f-432f-a3b1-dc357e0cfc1c", 00:24:44.922 "no_auto_visible": false 00:24:44.922 } 00:24:44.922 } 00:24:44.922 }, 00:24:44.922 { 00:24:44.922 "method": "nvmf_subsystem_add_listener", 00:24:44.922 "params": { 00:24:44.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.922 "listen_address": { 00:24:44.922 "trtype": "TCP", 00:24:44.922 "adrfam": "IPv4", 00:24:44.922 "traddr": "10.0.0.2", 00:24:44.922 "trsvcid": "4420" 00:24:44.922 }, 00:24:44.922 "secure_channel": false, 00:24:44.922 "sock_impl": "ssl" 00:24:44.922 } 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 } 00:24:44.922 ] 00:24:44.922 }' 00:24:44.922 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:45.856 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:45.856 "subsystems": [ 00:24:45.856 { 00:24:45.856 "subsystem": "keyring", 00:24:45.856 "config": [ 00:24:45.856 { 00:24:45.856 "method": "keyring_file_add_key", 00:24:45.857 "params": { 00:24:45.857 "name": "key0", 00:24:45.857 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:45.857 } 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "iobuf", 00:24:45.857 "config": [ 00:24:45.857 { 00:24:45.857 "method": "iobuf_set_options", 00:24:45.857 "params": { 00:24:45.857 "small_pool_count": 8192, 00:24:45.857 "large_pool_count": 1024, 00:24:45.857 "small_bufsize": 8192, 00:24:45.857 "large_bufsize": 135168 00:24:45.857 } 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "sock", 00:24:45.857 "config": [ 00:24:45.857 { 00:24:45.857 "method": "sock_set_default_impl", 00:24:45.857 "params": { 00:24:45.857 "impl_name": "posix" 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "sock_impl_set_options", 00:24:45.857 "params": { 00:24:45.857 "impl_name": "ssl", 00:24:45.857 "recv_buf_size": 4096, 00:24:45.857 "send_buf_size": 4096, 00:24:45.857 "enable_recv_pipe": true, 00:24:45.857 
"enable_quickack": false, 00:24:45.857 "enable_placement_id": 0, 00:24:45.857 "enable_zerocopy_send_server": true, 00:24:45.857 "enable_zerocopy_send_client": false, 00:24:45.857 "zerocopy_threshold": 0, 00:24:45.857 "tls_version": 0, 00:24:45.857 "enable_ktls": false 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "sock_impl_set_options", 00:24:45.857 "params": { 00:24:45.857 "impl_name": "posix", 00:24:45.857 "recv_buf_size": 2097152, 00:24:45.857 "send_buf_size": 2097152, 00:24:45.857 "enable_recv_pipe": true, 00:24:45.857 "enable_quickack": false, 00:24:45.857 "enable_placement_id": 0, 00:24:45.857 "enable_zerocopy_send_server": true, 00:24:45.857 "enable_zerocopy_send_client": false, 00:24:45.857 "zerocopy_threshold": 0, 00:24:45.857 "tls_version": 0, 00:24:45.857 "enable_ktls": false 00:24:45.857 } 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "vmd", 00:24:45.857 "config": [] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "accel", 00:24:45.857 "config": [ 00:24:45.857 { 00:24:45.857 "method": "accel_set_options", 00:24:45.857 "params": { 00:24:45.857 "small_cache_size": 128, 00:24:45.857 "large_cache_size": 16, 00:24:45.857 "task_count": 2048, 00:24:45.857 "sequence_count": 2048, 00:24:45.857 "buf_count": 2048 00:24:45.857 } 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "bdev", 00:24:45.857 "config": [ 00:24:45.857 { 00:24:45.857 "method": "bdev_set_options", 00:24:45.857 "params": { 00:24:45.857 "bdev_io_pool_size": 65535, 00:24:45.857 "bdev_io_cache_size": 256, 00:24:45.857 "bdev_auto_examine": true, 00:24:45.857 "iobuf_small_cache_size": 128, 00:24:45.857 "iobuf_large_cache_size": 16 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_raid_set_options", 00:24:45.857 "params": { 00:24:45.857 "process_window_size_kb": 1024, 00:24:45.857 "process_max_bandwidth_mb_sec": 0 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_iscsi_set_options", 00:24:45.857 "params": { 00:24:45.857 "timeout_sec": 30 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_nvme_set_options", 00:24:45.857 "params": { 00:24:45.857 "action_on_timeout": "none", 00:24:45.857 "timeout_us": 0, 00:24:45.857 "timeout_admin_us": 0, 00:24:45.857 "keep_alive_timeout_ms": 10000, 00:24:45.857 "arbitration_burst": 0, 00:24:45.857 "low_priority_weight": 0, 00:24:45.857 "medium_priority_weight": 0, 00:24:45.857 "high_priority_weight": 0, 00:24:45.857 "nvme_adminq_poll_period_us": 10000, 00:24:45.857 "nvme_ioq_poll_period_us": 0, 00:24:45.857 "io_queue_requests": 512, 00:24:45.857 "delay_cmd_submit": true, 00:24:45.857 "transport_retry_count": 4, 00:24:45.857 "bdev_retry_count": 3, 00:24:45.857 "transport_ack_timeout": 0, 00:24:45.857 "ctrlr_loss_timeout_sec": 0, 00:24:45.857 "reconnect_delay_sec": 0, 00:24:45.857 "fast_io_fail_timeout_sec": 0, 00:24:45.857 "disable_auto_failback": false, 00:24:45.857 "generate_uuids": false, 00:24:45.857 "transport_tos": 0, 00:24:45.857 "nvme_error_stat": false, 00:24:45.857 "rdma_srq_size": 0, 00:24:45.857 "io_path_stat": false, 00:24:45.857 "allow_accel_sequence": false, 00:24:45.857 "rdma_max_cq_size": 0, 00:24:45.857 "rdma_cm_event_timeout_ms": 0, 00:24:45.857 "dhchap_digests": [ 00:24:45.857 "sha256", 00:24:45.857 "sha384", 00:24:45.857 "sha512" 00:24:45.857 ], 00:24:45.857 "dhchap_dhgroups": [ 00:24:45.857 "null", 00:24:45.857 "ffdhe2048", 00:24:45.857 "ffdhe3072", 00:24:45.857 "ffdhe4096", 00:24:45.857 
"ffdhe6144", 00:24:45.857 "ffdhe8192" 00:24:45.857 ] 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_nvme_attach_controller", 00:24:45.857 "params": { 00:24:45.857 "name": "nvme0", 00:24:45.857 "trtype": "TCP", 00:24:45.857 "adrfam": "IPv4", 00:24:45.857 "traddr": "10.0.0.2", 00:24:45.857 "trsvcid": "4420", 00:24:45.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.857 "prchk_reftag": false, 00:24:45.857 "prchk_guard": false, 00:24:45.857 "ctrlr_loss_timeout_sec": 0, 00:24:45.857 "reconnect_delay_sec": 0, 00:24:45.857 "fast_io_fail_timeout_sec": 0, 00:24:45.857 "psk": "key0", 00:24:45.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.857 "hdgst": false, 00:24:45.857 "ddgst": false 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_nvme_set_hotplug", 00:24:45.857 "params": { 00:24:45.857 "period_us": 100000, 00:24:45.857 "enable": false 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_enable_histogram", 00:24:45.857 "params": { 00:24:45.857 "name": "nvme0n1", 00:24:45.857 "enable": true 00:24:45.857 } 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "method": "bdev_wait_for_examine" 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }, 00:24:45.857 { 00:24:45.857 "subsystem": "nbd", 00:24:45.857 "config": [] 00:24:45.857 } 00:24:45.857 ] 00:24:45.857 }' 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1576004 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1576004 ']' 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1576004 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1576004 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1576004' 00:24:45.857 killing process with pid 1576004 00:24:45.857 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1576004 00:24:45.857 Received shutdown signal, test time was about 1.000000 seconds 00:24:45.857 00:24:45.858 Latency(us) 00:24:45.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.858 =================================================================================================================== 00:24:45.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.858 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1576004 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1575979 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1575979 ']' 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1575979 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575979 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575979' 00:24:46.115 killing process with pid 1575979 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1575979 00:24:46.115 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1575979 00:24:46.374 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:46.374 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:46.374 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:46.374 "subsystems": [ 00:24:46.374 { 00:24:46.374 "subsystem": "keyring", 00:24:46.374 "config": [ 00:24:46.374 { 00:24:46.374 "method": "keyring_file_add_key", 00:24:46.374 "params": { 00:24:46.374 "name": "key0", 00:24:46.374 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:46.374 } 00:24:46.374 } 00:24:46.374 ] 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "subsystem": "iobuf", 00:24:46.374 "config": [ 00:24:46.374 { 00:24:46.374 "method": "iobuf_set_options", 00:24:46.374 "params": { 00:24:46.374 "small_pool_count": 8192, 00:24:46.374 "large_pool_count": 1024, 00:24:46.374 "small_bufsize": 8192, 00:24:46.374 "large_bufsize": 135168 00:24:46.374 } 00:24:46.374 } 00:24:46.374 ] 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "subsystem": "sock", 00:24:46.374 "config": [ 00:24:46.374 { 00:24:46.374 "method": "sock_set_default_impl", 00:24:46.374 "params": { 00:24:46.374 "impl_name": "posix" 00:24:46.374 } 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "method": "sock_impl_set_options", 00:24:46.374 "params": { 00:24:46.374 "impl_name": "ssl", 00:24:46.374 "recv_buf_size": 4096, 00:24:46.374 "send_buf_size": 4096, 00:24:46.374 "enable_recv_pipe": true, 00:24:46.374 "enable_quickack": false, 00:24:46.374 "enable_placement_id": 0, 00:24:46.374 "enable_zerocopy_send_server": true, 00:24:46.374 "enable_zerocopy_send_client": false, 00:24:46.374 "zerocopy_threshold": 0, 00:24:46.374 "tls_version": 0, 00:24:46.374 "enable_ktls": false 00:24:46.374 } 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "method": "sock_impl_set_options", 00:24:46.374 "params": { 00:24:46.374 "impl_name": "posix", 00:24:46.374 "recv_buf_size": 2097152, 00:24:46.374 "send_buf_size": 2097152, 00:24:46.374 "enable_recv_pipe": true, 00:24:46.374 "enable_quickack": false, 00:24:46.374 "enable_placement_id": 0, 00:24:46.374 "enable_zerocopy_send_server": true, 00:24:46.374 "enable_zerocopy_send_client": false, 00:24:46.374 "zerocopy_threshold": 0, 00:24:46.374 "tls_version": 0, 00:24:46.374 "enable_ktls": false 00:24:46.374 } 00:24:46.374 } 00:24:46.374 ] 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "subsystem": "vmd", 00:24:46.374 "config": [] 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "subsystem": "accel", 00:24:46.374 "config": [ 00:24:46.374 { 00:24:46.374 "method": "accel_set_options", 00:24:46.374 "params": { 00:24:46.374 "small_cache_size": 128, 00:24:46.374 "large_cache_size": 16, 00:24:46.374 "task_count": 2048, 
00:24:46.374 "sequence_count": 2048, 00:24:46.374 "buf_count": 2048 00:24:46.374 } 00:24:46.374 } 00:24:46.374 ] 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "subsystem": "bdev", 00:24:46.374 "config": [ 00:24:46.374 { 00:24:46.374 "method": "bdev_set_options", 00:24:46.374 "params": { 00:24:46.374 "bdev_io_pool_size": 65535, 00:24:46.374 "bdev_io_cache_size": 256, 00:24:46.374 "bdev_auto_examine": true, 00:24:46.374 "iobuf_small_cache_size": 128, 00:24:46.374 "iobuf_large_cache_size": 16 00:24:46.374 } 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "method": "bdev_raid_set_options", 00:24:46.374 "params": { 00:24:46.374 "process_window_size_kb": 1024, 00:24:46.374 "process_max_bandwidth_mb_sec": 0 00:24:46.374 } 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "method": "bdev_iscsi_set_options", 00:24:46.374 "params": { 00:24:46.374 "timeout_sec": 30 00:24:46.374 } 00:24:46.374 }, 00:24:46.374 { 00:24:46.374 "method": "bdev_nvme_set_options", 00:24:46.374 "params": { 00:24:46.374 "action_on_timeout": "none", 00:24:46.374 "timeout_us": 0, 00:24:46.374 "timeout_admin_us": 0, 00:24:46.374 "keep_alive_timeout_ms": 10000, 00:24:46.374 "arbitration_burst": 0, 00:24:46.374 "low_priority_weight": 0, 00:24:46.374 "medium_priority_weight": 0, 00:24:46.374 "high_priority_weight": 0, 00:24:46.374 "nvme_adminq_poll_period_us": 10000, 00:24:46.374 "nvme_ioq_poll_period_us": 0, 00:24:46.374 "io_queue_requests": 0, 00:24:46.374 "delay_cmd_submit": true, 00:24:46.374 "transport_retry_count": 4, 00:24:46.374 "bdev_retry_count": 3, 00:24:46.374 "transport_ack_timeout": 0, 00:24:46.374 "ctrlr_loss_timeout_sec": 0, 00:24:46.374 "reconnect_delay_sec": 0, 00:24:46.374 "fast_io_fail_timeout_sec": 0, 00:24:46.374 "disable_auto_failback": false, 00:24:46.375 "generate_uuids": false, 00:24:46.375 "transport_tos": 0, 00:24:46.375 "nvme_error_stat": false, 00:24:46.375 "rdma_srq_size": 0, 00:24:46.375 "io_path_stat": false, 00:24:46.375 "allow_accel_sequence": false, 00:24:46.375 "rdma_max_cq_size": 0, 00:24:46.375 "rdma_cm_event_timeout_ms": 0, 00:24:46.375 "dhchap_digests": [ 00:24:46.375 "sha256", 00:24:46.375 "sha384", 00:24:46.375 "sha512" 00:24:46.375 ], 00:24:46.375 "dhchap_dhgroups": [ 00:24:46.375 "null", 00:24:46.375 "ffdhe2048", 00:24:46.375 "ffdhe3072", 00:24:46.375 "ffdhe4096", 00:24:46.375 "ffdhe6144", 00:24:46.375 "ffdhe8192" 00:24:46.375 ] 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "bdev_nvme_set_hotplug", 00:24:46.375 "params": { 00:24:46.375 "period_us": 100000, 00:24:46.375 "enable": false 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "bdev_malloc_create", 00:24:46.375 "params": { 00:24:46.375 "name": "malloc0", 00:24:46.375 "num_blocks": 8192, 00:24:46.375 "block_size": 4096, 00:24:46.375 "physical_block_size": 4096, 00:24:46.375 "uuid": "c0ca1a1f-4f4f-432f-a3b1-dc357e0cfc1c", 00:24:46.375 "optimal_io_boundary": 0, 00:24:46.375 "md_size": 0, 00:24:46.375 "dif_type": 0, 00:24:46.375 "dif_is_head_of_md": false, 00:24:46.375 "dif_pi_format": 0 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "bdev_wait_for_examine" 00:24:46.375 } 00:24:46.375 ] 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "subsystem": "nbd", 00:24:46.375 "config": [] 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "subsystem": "scheduler", 00:24:46.375 "config": [ 00:24:46.375 { 00:24:46.375 "method": "framework_set_scheduler", 00:24:46.375 "params": { 00:24:46.375 "name": "static" 00:24:46.375 } 00:24:46.375 } 00:24:46.375 ] 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 
"subsystem": "nvmf", 00:24:46.375 "config": [ 00:24:46.375 { 00:24:46.375 "method": "nvmf_set_config", 00:24:46.375 "params": { 00:24:46.375 "discovery_filter": "match_any", 00:24:46.375 "admin_cmd_passthru": { 00:24:46.375 "identify_ctrlr": false 00:24:46.375 }, 00:24:46.375 "dhchap_digests": [ 00:24:46.375 "sha256", 00:24:46.375 "sha384", 00:24:46.375 "sha512" 00:24:46.375 ], 00:24:46.375 "dhchap_dhgroups": [ 00:24:46.375 "null", 00:24:46.375 "ffdhe2048", 00:24:46.375 "ffdhe3072", 00:24:46.375 "ffdhe4096", 00:24:46.375 "ffdhe6144", 00:24:46.375 "ffdhe8192" 00:24:46.375 ] 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_set_max_subsystems", 00:24:46.375 "params": { 00:24:46.375 "max_subsystems": 1024 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_set_crdt", 00:24:46.375 "params": { 00:24:46.375 "crdt1": 0, 00:24:46.375 "crdt2": 0, 00:24:46.375 "crdt3": 0 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_create_transport", 00:24:46.375 "params": { 00:24:46.375 "trtype": "TCP", 00:24:46.375 "max_queue_depth": 128, 00:24:46.375 "max_io_qpairs_per_ctrlr": 127, 00:24:46.375 "in_capsule_data_size": 4096, 00:24:46.375 "max_io_size": 131072, 00:24:46.375 "io_unit_size": 131072, 00:24:46.375 "max_aq_depth": 128, 00:24:46.375 "num_shared_buffers": 511, 00:24:46.375 "buf_cache_size": 4294967295, 00:24:46.375 "dif_insert_or_strip": false, 00:24:46.375 "zcopy": false, 00:24:46.375 "c2h_success": false, 00:24:46.375 "sock_priority": 0, 00:24:46.375 "abort_timeout_sec": 1, 00:24:46.375 "ack_timeout": 0, 00:24:46.375 "data_wr_pool_size": 0 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_create_subsystem", 00:24:46.375 "params": { 00:24:46.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.375 "allow_any_host": false, 00:24:46.375 "serial_number": "00000000000000000000", 00:24:46.375 "model_number": "SPDK bdev Controller", 00:24:46.375 "max_namespaces": 32, 00:24:46.375 "min_cntlid": 1, 00:24:46.375 "max_cntlid": 65519, 00:24:46.375 "ana_reporting": false 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_subsystem_add_host", 00:24:46.375 "params": { 00:24:46.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.375 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.375 "psk": "key0" 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_subsystem_add_ns", 00:24:46.375 "params": { 00:24:46.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.375 "namespace": { 00:24:46.375 "nsid": 1, 00:24:46.375 "bdev_name": "malloc0", 00:24:46.375 "nguid": "C0CA1A1F4F4F432FA3B1DC357E0CFC1C", 00:24:46.375 "uuid": "c0ca1a1f-4f4f-432f-a3b1-dc357e0cfc1c", 00:24:46.375 "no_auto_visible": false 00:24:46.375 } 00:24:46.375 } 00:24:46.375 }, 00:24:46.375 { 00:24:46.375 "method": "nvmf_subsystem_add_listener", 00:24:46.375 "params": { 00:24:46.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.375 "listen_address": { 00:24:46.375 "trtype": "TCP", 00:24:46.375 "adrfam": "IPv4", 00:24:46.375 "traddr": "10.0.0.2", 00:24:46.375 "trsvcid": "4420" 00:24:46.375 }, 00:24:46.375 "secure_channel": false, 00:24:46.375 "sock_impl": "ssl" 00:24:46.375 } 00:24:46.375 } 00:24:46.375 ] 00:24:46.375 } 00:24:46.375 ] 00:24:46.375 }' 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # nvmfpid=1576539 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1576539 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1576539 ']' 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.375 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.634 [2024-10-07 09:45:41.200055] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:46.634 [2024-10-07 09:45:41.200157] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.634 [2024-10-07 09:45:41.273504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.634 [2024-10-07 09:45:41.392454] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.634 [2024-10-07 09:45:41.392517] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.634 [2024-10-07 09:45:41.392541] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.634 [2024-10-07 09:45:41.392556] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.634 [2024-10-07 09:45:41.392568] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
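The tgtcfg JSON captured with save_config is replayed into this fresh nvmf_tgt through process substitution, which is why the command line above shows -c /dev/fd/62 (the fd number depends on the shell). A sketch of that restart-from-config step, reusing the helpers sketched earlier:

    tgtcfg=$("$RPC" save_config)                   # snapshot of the running target: keyring, TLS listener, malloc0 namespace, PSK host
    kill_spdk_proc "$NVMF_PID"                     # stop the old instance

    # restart with identical state; <(...) appears as /dev/fd/NN on the new process's command line
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    NVMF_PID=$!
    wait_for_rpc_sock "$NVMF_PID" /var/tmp/spdk.sock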
00:24:46.634 [2024-10-07 09:45:41.393340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.892 [2024-10-07 09:45:41.662759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.892 [2024-10-07 09:45:41.694778] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.892 [2024-10-07 09:45:41.695049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1576687 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1576687 /var/tmp/bdevperf.sock 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1576687 ']' 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
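bdevperf is relaunched the same way from the bperfcfg blob (the -c /dev/fd/63 in the bdevperf command above; the blob itself is echoed just below), so no per-RPC keyring or attach step is needed before perform_tests this time. Sketch:

    "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &    # bperfcfg: JSON saved earlier from the first bdevperf instance
    BDEVPERF_PID=$!
    wait_for_rpc_sock "$BDEVPERF_PID" /var/tmp/bdevperf.sock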
00:24:47.826 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:47.826 "subsystems": [ 00:24:47.826 { 00:24:47.826 "subsystem": "keyring", 00:24:47.826 "config": [ 00:24:47.826 { 00:24:47.826 "method": "keyring_file_add_key", 00:24:47.826 "params": { 00:24:47.826 "name": "key0", 00:24:47.826 "path": "/tmp/tmp.zxUnf0GL3m" 00:24:47.826 } 00:24:47.826 } 00:24:47.826 ] 00:24:47.826 }, 00:24:47.826 { 00:24:47.826 "subsystem": "iobuf", 00:24:47.826 "config": [ 00:24:47.826 { 00:24:47.826 "method": "iobuf_set_options", 00:24:47.826 "params": { 00:24:47.826 "small_pool_count": 8192, 00:24:47.826 "large_pool_count": 1024, 00:24:47.826 "small_bufsize": 8192, 00:24:47.826 "large_bufsize": 135168 00:24:47.826 } 00:24:47.826 } 00:24:47.826 ] 00:24:47.826 }, 00:24:47.826 { 00:24:47.826 "subsystem": "sock", 00:24:47.826 "config": [ 00:24:47.826 { 00:24:47.826 "method": "sock_set_default_impl", 00:24:47.826 "params": { 00:24:47.826 "impl_name": "posix" 00:24:47.826 } 00:24:47.826 }, 00:24:47.826 { 00:24:47.826 "method": "sock_impl_set_options", 00:24:47.826 "params": { 00:24:47.826 "impl_name": "ssl", 00:24:47.826 "recv_buf_size": 4096, 00:24:47.826 "send_buf_size": 4096, 00:24:47.826 "enable_recv_pipe": true, 00:24:47.826 "enable_quickack": false, 00:24:47.826 "enable_placement_id": 0, 00:24:47.826 "enable_zerocopy_send_server": true, 00:24:47.826 "enable_zerocopy_send_client": false, 00:24:47.826 "zerocopy_threshold": 0, 00:24:47.827 "tls_version": 0, 00:24:47.827 "enable_ktls": false 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "sock_impl_set_options", 00:24:47.827 "params": { 00:24:47.827 "impl_name": "posix", 00:24:47.827 "recv_buf_size": 2097152, 00:24:47.827 "send_buf_size": 2097152, 00:24:47.827 "enable_recv_pipe": true, 00:24:47.827 "enable_quickack": false, 00:24:47.827 "enable_placement_id": 0, 00:24:47.827 "enable_zerocopy_send_server": true, 00:24:47.827 "enable_zerocopy_send_client": false, 00:24:47.827 "zerocopy_threshold": 0, 00:24:47.827 "tls_version": 0, 00:24:47.827 "enable_ktls": false 00:24:47.827 } 00:24:47.827 } 00:24:47.827 ] 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "subsystem": "vmd", 00:24:47.827 "config": [] 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "subsystem": "accel", 00:24:47.827 "config": [ 00:24:47.827 { 00:24:47.827 "method": "accel_set_options", 00:24:47.827 "params": { 00:24:47.827 "small_cache_size": 128, 00:24:47.827 "large_cache_size": 16, 00:24:47.827 "task_count": 2048, 00:24:47.827 "sequence_count": 2048, 00:24:47.827 "buf_count": 2048 00:24:47.827 } 00:24:47.827 } 00:24:47.827 ] 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "subsystem": "bdev", 00:24:47.827 "config": [ 00:24:47.827 { 00:24:47.827 "method": "bdev_set_options", 00:24:47.827 "params": { 00:24:47.827 "bdev_io_pool_size": 65535, 00:24:47.827 "bdev_io_cache_size": 256, 00:24:47.827 "bdev_auto_examine": true, 00:24:47.827 "iobuf_small_cache_size": 128, 00:24:47.827 "iobuf_large_cache_size": 16 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_raid_set_options", 00:24:47.827 "params": { 00:24:47.827 "process_window_size_kb": 1024, 00:24:47.827 "process_max_bandwidth_mb_sec": 0 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_iscsi_set_options", 00:24:47.827 "params": { 00:24:47.827 "timeout_sec": 30 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_nvme_set_options", 00:24:47.827 "params": { 00:24:47.827 "action_on_timeout": "none", 00:24:47.827 "timeout_us": 0, 
00:24:47.827 "timeout_admin_us": 0, 00:24:47.827 "keep_alive_timeout_ms": 10000, 00:24:47.827 "arbitration_burst": 0, 00:24:47.827 "low_priority_weight": 0, 00:24:47.827 "medium_priority_weight": 0, 00:24:47.827 "high_priority_weight": 0, 00:24:47.827 "nvme_adminq_poll_period_us": 10000, 00:24:47.827 "nvme_ioq_poll_period_us": 0, 00:24:47.827 "io_queue_requests": 512, 00:24:47.827 "delay_cmd_submit": true, 00:24:47.827 "transport_retry_count": 4, 00:24:47.827 "bdev_retry_count": 3, 00:24:47.827 "transport_ack_timeout": 0, 00:24:47.827 "ctrlr_loss_timeout_sec": 0, 00:24:47.827 "reconnect_delay_sec": 0, 00:24:47.827 "fast_io_fail_timeout_sec": 0, 00:24:47.827 "disable_auto_failback": false, 00:24:47.827 "generate_uuids": false, 00:24:47.827 "transport_tos": 0, 00:24:47.827 "nvme_error_stat": false, 00:24:47.827 "rdma_srq_size": 0, 00:24:47.827 "io_path_stat": false, 00:24:47.827 "allow_accel_sequence": false, 00:24:47.827 "rdma_max_cq_size": 0, 00:24:47.827 "rdma_cm_event_timeout_ms": 0, 00:24:47.827 "dhchap_digests": [ 00:24:47.827 "sha256", 00:24:47.827 "sha384", 00:24:47.827 "sha512" 00:24:47.827 ], 00:24:47.827 "dhchap_dhgroups": [ 00:24:47.827 "null", 00:24:47.827 "ffdhe2048", 00:24:47.827 "ffdhe3072", 00:24:47.827 "ffdhe4096", 00:24:47.827 "ffdhe6144", 00:24:47.827 "ffdhe8192" 00:24:47.827 ] 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_nvme_attach_controller", 00:24:47.827 "params": { 00:24:47.827 "name": "nvme0", 00:24:47.827 "trtype": "TCP", 00:24:47.827 "adrfam": "IPv4", 00:24:47.827 "traddr": "10.0.0.2", 00:24:47.827 "trsvcid": "4420", 00:24:47.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.827 "prchk_reftag": false, 00:24:47.827 "prchk_guard": false, 00:24:47.827 "ctrlr_loss_timeout_sec": 0, 00:24:47.827 "reconnect_delay_sec": 0, 00:24:47.827 "fast_io_fail_timeout_sec": 0, 00:24:47.827 "psk": "key0", 00:24:47.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.827 "hdgst": false, 00:24:47.827 "ddgst": false 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_nvme_set_hotplug", 00:24:47.827 "params": { 00:24:47.827 "period_us": 100000, 00:24:47.827 "enable": false 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_enable_histogram", 00:24:47.827 "params": { 00:24:47.827 "name": "nvme0n1", 00:24:47.827 "enable": true 00:24:47.827 } 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "method": "bdev_wait_for_examine" 00:24:47.827 } 00:24:47.827 ] 00:24:47.827 }, 00:24:47.827 { 00:24:47.827 "subsystem": "nbd", 00:24:47.827 "config": [] 00:24:47.827 } 00:24:47.827 ] 00:24:47.827 }' 00:24:47.827 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.827 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.827 [2024-10-07 09:45:42.431316] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
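The next step below checks that the replayed config really produced the nvme0 controller before driving I/O, and the perform_tests JSON blobs shown earlier are easy to post-process. A sketch of both; results.json is a placeholder for a saved copy of one of those blobs:

    name=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1                 # the config-driven attach must have created nvme0

    # headline numbers from a saved perform_tests result
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json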
00:24:47.827 [2024-10-07 09:45:42.431413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576687 ] 00:24:47.827 [2024-10-07 09:45:42.500048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.827 [2024-10-07 09:45:42.625583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.086 [2024-10-07 09:45:42.815367] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.344 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.344 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:48.344 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.344 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:48.601 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.602 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.602 Running I/O for 1 seconds... 00:24:49.976 3159.00 IOPS, 12.34 MiB/s 00:24:49.976 Latency(us) 00:24:49.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.976 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:49.976 Verification LBA range: start 0x0 length 0x2000 00:24:49.976 nvme0n1 : 1.02 3219.78 12.58 0.00 0.00 39357.79 7524.50 33399.09 00:24:49.976 =================================================================================================================== 00:24:49.976 Total : 3219.78 12.58 0.00 0.00 39357.79 7524.50 33399.09 00:24:49.976 { 00:24:49.976 "results": [ 00:24:49.976 { 00:24:49.976 "job": "nvme0n1", 00:24:49.976 "core_mask": "0x2", 00:24:49.976 "workload": "verify", 00:24:49.976 "status": "finished", 00:24:49.976 "verify_range": { 00:24:49.976 "start": 0, 00:24:49.976 "length": 8192 00:24:49.976 }, 00:24:49.976 "queue_depth": 128, 00:24:49.976 "io_size": 4096, 00:24:49.976 "runtime": 1.021187, 00:24:49.976 "iops": 3219.7824688328387, 00:24:49.976 "mibps": 12.577275268878276, 00:24:49.976 "io_failed": 0, 00:24:49.976 "io_timeout": 0, 00:24:49.976 "avg_latency_us": 39357.794805803365, 00:24:49.976 "min_latency_us": 7524.503703703704, 00:24:49.976 "max_latency_us": 33399.08740740741 00:24:49.976 } 00:24:49.976 ], 00:24:49.976 "core_count": 1 00:24:49.976 } 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:49.976 nvmf_trace.0 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1576687 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1576687 ']' 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1576687 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1576687 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1576687' 00:24:49.976 killing process with pid 1576687 00:24:49.976 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1576687 00:24:49.976 Received shutdown signal, test time was about 1.000000 seconds 00:24:49.976 00:24:49.977 Latency(us) 00:24:49.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.977 =================================================================================================================== 00:24:49.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.977 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1576687 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.235 rmmod nvme_tcp 00:24:50.235 rmmod nvme_fabrics 00:24:50.235 rmmod nvme_keyring 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1576539 ']' 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1576539 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1576539 ']' 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1576539 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1576539 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1576539' 00:24:50.235 killing process with pid 1576539 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1576539 00:24:50.235 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1576539 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.494 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.q4gIwy6V06 /tmp/tmp.y0nrnNOFoR /tmp/tmp.zxUnf0GL3m 00:24:53.037 00:24:53.037 real 1m39.491s 00:24:53.037 user 2m49.043s 00:24:53.037 sys 0m30.924s 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.037 ************************************ 00:24:53.037 END TEST nvmf_tls 00:24:53.037 ************************************ 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:53.037 ************************************ 00:24:53.037 START TEST nvmf_fips 00:24:53.037 ************************************ 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:53.037 * Looking for test storage... 00:24:53.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.037 --rc genhtml_branch_coverage=1 00:24:53.037 --rc genhtml_function_coverage=1 00:24:53.037 --rc genhtml_legend=1 00:24:53.037 --rc geninfo_all_blocks=1 00:24:53.037 --rc geninfo_unexecuted_blocks=1 00:24:53.037 00:24:53.037 ' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.037 --rc genhtml_branch_coverage=1 00:24:53.037 --rc genhtml_function_coverage=1 00:24:53.037 --rc genhtml_legend=1 00:24:53.037 --rc geninfo_all_blocks=1 00:24:53.037 --rc geninfo_unexecuted_blocks=1 00:24:53.037 00:24:53.037 ' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.037 --rc genhtml_branch_coverage=1 00:24:53.037 --rc genhtml_function_coverage=1 00:24:53.037 --rc genhtml_legend=1 00:24:53.037 --rc geninfo_all_blocks=1 00:24:53.037 --rc geninfo_unexecuted_blocks=1 00:24:53.037 00:24:53.037 ' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.037 --rc genhtml_branch_coverage=1 00:24:53.037 --rc genhtml_function_coverage=1 00:24:53.037 --rc genhtml_legend=1 00:24:53.037 --rc geninfo_all_blocks=1 00:24:53.037 --rc geninfo_unexecuted_blocks=1 00:24:53.037 00:24:53.037 ' 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:53.037 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:53.038 09:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:53.038 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:53.039 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:53.298 Error setting digest 00:24:53.298 40D24D177A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:53.298 40D24D177A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:53.298 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:53.298 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.298 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:53.299 
09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.299 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.830 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:55.830 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:55.830 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:55.830 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:55.830 Found net devices under 0000:84:00.0: cvl_0_0 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:55.830 Found net devices under 0000:84:00.1: cvl_0_1 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.830 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.831 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.831 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:24:56.090 00:24:56.090 --- 10.0.0.2 ping statistics --- 00:24:56.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.090 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:56.090 00:24:56.090 --- 10.0.0.1 ping statistics --- 00:24:56.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.090 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1579075 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1579075 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1579075 ']' 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.090 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.090 [2024-10-07 09:45:50.783512] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:24:56.090 [2024-10-07 09:45:50.783601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.090 [2024-10-07 09:45:50.857038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.350 [2024-10-07 09:45:50.978351] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.350 [2024-10-07 09:45:50.978420] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.350 [2024-10-07 09:45:50.978438] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.350 [2024-10-07 09:45:50.978452] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.350 [2024-10-07 09:45:50.978463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.350 [2024-10-07 09:45:50.979175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.350 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.350 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:56.350 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:56.350 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.350 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.pmV 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.pmV 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.pmV 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.pmV 00:24:56.609 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:56.869 [2024-10-07 09:45:51.508663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.869 [2024-10-07 09:45:51.524731] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.869 [2024-10-07 09:45:51.525126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.869 malloc0 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.869 09:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1579225 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1579225 /var/tmp/bdevperf.sock 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1579225 ']' 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.869 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.869 [2024-10-07 09:45:51.683536] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:24:56.869 [2024-10-07 09:45:51.683630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579225 ] 00:24:57.128 [2024-10-07 09:45:51.747011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.128 [2024-10-07 09:45:51.855071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.386 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.386 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:57.386 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.pmV 00:24:57.644 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:57.902 [2024-10-07 09:45:52.523057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.902 TLSTESTn1 00:24:57.902 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:57.902 Running I/O for 10 seconds... 
00:25:08.125 3428.00 IOPS, 13.39 MiB/s 3586.00 IOPS, 14.01 MiB/s 3617.33 IOPS, 14.13 MiB/s 3631.00 IOPS, 14.18 MiB/s 3628.40 IOPS, 14.17 MiB/s 3630.83 IOPS, 14.18 MiB/s 3633.57 IOPS, 14.19 MiB/s 3625.12 IOPS, 14.16 MiB/s 3623.56 IOPS, 14.15 MiB/s 3624.70 IOPS, 14.16 MiB/s 00:25:08.125 Latency(us) 00:25:08.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.125 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:08.125 Verification LBA range: start 0x0 length 0x2000 00:25:08.125 TLSTESTn1 : 10.02 3630.84 14.18 0.00 0.00 35198.92 6310.87 53593.88 00:25:08.125 =================================================================================================================== 00:25:08.125 Total : 3630.84 14.18 0.00 0.00 35198.92 6310.87 53593.88 00:25:08.125 { 00:25:08.125 "results": [ 00:25:08.125 { 00:25:08.125 "job": "TLSTESTn1", 00:25:08.125 "core_mask": "0x4", 00:25:08.125 "workload": "verify", 00:25:08.125 "status": "finished", 00:25:08.125 "verify_range": { 00:25:08.125 "start": 0, 00:25:08.125 "length": 8192 00:25:08.125 }, 00:25:08.125 "queue_depth": 128, 00:25:08.125 "io_size": 4096, 00:25:08.125 "runtime": 10.017793, 00:25:08.125 "iops": 3630.839647016064, 00:25:08.125 "mibps": 14.1829673711565, 00:25:08.125 "io_failed": 0, 00:25:08.125 "io_timeout": 0, 00:25:08.125 "avg_latency_us": 35198.917531950334, 00:25:08.125 "min_latency_us": 6310.874074074074, 00:25:08.125 "max_latency_us": 53593.88444444445 00:25:08.125 } 00:25:08.125 ], 00:25:08.125 "core_count": 1 00:25:08.125 } 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:08.125 nvmf_trace.0 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1579225 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1579225 ']' 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1579225 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579225 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579225' 00:25:08.125 killing process with pid 1579225 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1579225 00:25:08.125 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.125 00:25:08.125 Latency(us) 00:25:08.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.125 =================================================================================================================== 00:25:08.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.125 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1579225 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.692 rmmod nvme_tcp 00:25:08.692 rmmod nvme_fabrics 00:25:08.692 rmmod nvme_keyring 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1579075 ']' 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1579075 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1579075 ']' 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1579075 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579075 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579075' 00:25:08.692 killing process with pid 1579075 00:25:08.692 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1579075 00:25:08.692 09:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1579075 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.951 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.pmV 00:25:11.482 00:25:11.482 real 0m18.394s 00:25:11.482 user 0m22.726s 00:25:11.482 sys 0m7.102s 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:11.482 ************************************ 00:25:11.482 END TEST nvmf_fips 00:25:11.482 ************************************ 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.482 ************************************ 00:25:11.482 START TEST nvmf_control_msg_list 00:25:11.482 ************************************ 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:11.482 * Looking for test storage... 
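Backing up briefly to the nvmf_fips results above: the JSON block printed at the end of that run is straightforward to post-process. A minimal sketch, assuming the block has been saved to a file named perf.json (a hypothetical name; the harness only echoes it to the console) and that jq is available; the key names are copied verbatim from that output:

# Pull the headline numbers out of the JSON summary shown above.
# Keys used: job, iops, mibps, avg_latency_us (all present in the blob).
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' perf.json

Run against the values above, this prints one line for TLSTESTn1: roughly 3630 IOPS at about 14.18 MiB/s with an average latency of about 35.2 ms.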
00:25:11.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:11.482 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.483 --rc genhtml_branch_coverage=1 00:25:11.483 --rc genhtml_function_coverage=1 00:25:11.483 --rc genhtml_legend=1 00:25:11.483 --rc geninfo_all_blocks=1 00:25:11.483 --rc geninfo_unexecuted_blocks=1 00:25:11.483 00:25:11.483 ' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.483 --rc genhtml_branch_coverage=1 00:25:11.483 --rc genhtml_function_coverage=1 00:25:11.483 --rc genhtml_legend=1 00:25:11.483 --rc geninfo_all_blocks=1 00:25:11.483 --rc geninfo_unexecuted_blocks=1 00:25:11.483 00:25:11.483 ' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.483 --rc genhtml_branch_coverage=1 00:25:11.483 --rc genhtml_function_coverage=1 00:25:11.483 --rc genhtml_legend=1 00:25:11.483 --rc geninfo_all_blocks=1 00:25:11.483 --rc geninfo_unexecuted_blocks=1 00:25:11.483 00:25:11.483 ' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:11.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.483 --rc genhtml_branch_coverage=1 00:25:11.483 --rc genhtml_function_coverage=1 00:25:11.483 --rc genhtml_legend=1 00:25:11.483 --rc geninfo_all_blocks=1 00:25:11.483 --rc geninfo_unexecuted_blocks=1 00:25:11.483 00:25:11.483 ' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:11.483 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.484 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.063 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.063 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.063 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.063 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:14.353 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:14.354 09:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:14.354 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.354 09:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:14.354 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:14.354 Found net devices under 0000:84:00.0: cvl_0_0 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:14.354 Found net devices under 0000:84:00.1: cvl_0_1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.354 09:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:14.354 00:25:14.354 --- 10.0.0.2 ping statistics --- 00:25:14.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.354 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:25:14.354 00:25:14.354 --- 10.0.0.1 ping statistics --- 00:25:14.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.354 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.354 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:14.355 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1582632 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1582632 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1582632 ']' 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.355 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.355 [2024-10-07 09:46:09.116995] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:25:14.355 [2024-10-07 09:46:09.117160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.613 [2024-10-07 09:46:09.224089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.613 [2024-10-07 09:46:09.346190] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.613 [2024-10-07 09:46:09.346263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.613 [2024-10-07 09:46:09.346280] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.613 [2024-10-07 09:46:09.346294] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.613 [2024-10-07 09:46:09.346305] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
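The target that is starting here was launched inside the cvl_0_0_ns_spdk namespace prepared a few lines earlier. Condensed into plain commands, the wiring the harness performed on the two detected E810 ports (cvl_0_0 and cvl_0_1) looks like the sketch below; this is only a recap of the commands visible in the trace above, not an exact reproduction of the helper functions:

# Move one port into a private namespace and address both ends of the link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP traffic to port 4420 arriving on the initiator-side port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity pings in both directions (answered in ~0.2 ms and ~0.09 ms above).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The real iptables rule also carries an SPDK_NVMF comment tag, which is why the later cleanup can drop it with the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence seen elsewhere in this log.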
00:25:14.613 [2024-10-07 09:46:09.346995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 [2024-10-07 09:46:09.522638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 Malloc0 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.873 09:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:14.873 [2024-10-07 09:46:09.574092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1582663 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1582664 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1582665 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1582663 00:25:14.873 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.873 [2024-10-07 09:46:09.632583] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:14.873 [2024-10-07 09:46:09.642552] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:14.873 [2024-10-07 09:46:09.642840] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:16.247 Initializing NVMe Controllers 00:25:16.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:16.247 Initialization complete. Launching workers. 
00:25:16.247 ======================================================== 00:25:16.247 Latency(us) 00:25:16.247 Device Information : IOPS MiB/s Average min max 00:25:16.247 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3184.00 12.44 313.39 163.32 772.80 00:25:16.247 ======================================================== 00:25:16.247 Total : 3184.00 12.44 313.39 163.32 772.80 00:25:16.247 00:25:16.247 Initializing NVMe Controllers 00:25:16.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:16.248 Initialization complete. Launching workers. 00:25:16.248 ======================================================== 00:25:16.248 Latency(us) 00:25:16.248 Device Information : IOPS MiB/s Average min max 00:25:16.248 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3348.00 13.08 298.05 175.19 587.18 00:25:16.248 ======================================================== 00:25:16.248 Total : 3348.00 13.08 298.05 175.19 587.18 00:25:16.248 00:25:16.248 Initializing NVMe Controllers 00:25:16.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:16.248 Initialization complete. Launching workers. 00:25:16.248 ======================================================== 00:25:16.248 Latency(us) 00:25:16.248 Device Information : IOPS MiB/s Average min max 00:25:16.248 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3171.00 12.39 314.67 171.87 591.09 00:25:16.248 ======================================================== 00:25:16.248 Total : 3171.00 12.39 314.67 171.87 591.09 00:25:16.248 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1582664 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1582665 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.248 rmmod nvme_tcp 00:25:16.248 rmmod nvme_fabrics 00:25:16.248 rmmod nvme_keyring 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 1582632 ']' 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1582632 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1582632 ']' 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1582632 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1582632 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1582632' 00:25:16.248 killing process with pid 1582632 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1582632 00:25:16.248 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1582632 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.506 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.037 00:25:19.037 real 0m7.413s 00:25:19.037 user 0m5.958s 00:25:19.037 sys 0m3.545s 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:19.037 ************************************ 00:25:19.037 END TEST nvmf_control_msg_list 00:25:19.037 ************************************ 
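Before the next test (nvmf_wait_for_buf) starts, a condensed view of what nvmf_control_msg_list actually drove, using the same rpc_cmd wrapper and perf flags that appear in the trace above (rpc_cmd comes from the harness's common scripts and talks to the target over /var/tmp/spdk.sock; binary paths are shortened here):

# TCP transport with a small in-capsule data size and the control-message
# pool deliberately limited to a single entry, as configured above.
rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Three one-second perf runs on separate cores all target the same listener;
# in the run above each settled at roughly 3.2-3.3k IOPS.
for mask in 0x2 0x4 0x8; do
  spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait

The interesting knob is --control-msg-num 1 combined with three concurrent initiators; the point appears to be exercising the target when control messages are scarce, and all three perf processes exiting cleanly is what the surrounding wait calls verify.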
00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:19.037 ************************************ 00:25:19.037 START TEST nvmf_wait_for_buf 00:25:19.037 ************************************ 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:19.037 * Looking for test storage... 00:25:19.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.037 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:19.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.038 --rc genhtml_branch_coverage=1 00:25:19.038 --rc genhtml_function_coverage=1 00:25:19.038 --rc genhtml_legend=1 00:25:19.038 --rc geninfo_all_blocks=1 00:25:19.038 --rc geninfo_unexecuted_blocks=1 00:25:19.038 00:25:19.038 ' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:19.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.038 --rc genhtml_branch_coverage=1 00:25:19.038 --rc genhtml_function_coverage=1 00:25:19.038 --rc genhtml_legend=1 00:25:19.038 --rc geninfo_all_blocks=1 00:25:19.038 --rc geninfo_unexecuted_blocks=1 00:25:19.038 00:25:19.038 ' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:19.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.038 --rc genhtml_branch_coverage=1 00:25:19.038 --rc genhtml_function_coverage=1 00:25:19.038 --rc genhtml_legend=1 00:25:19.038 --rc geninfo_all_blocks=1 00:25:19.038 --rc geninfo_unexecuted_blocks=1 00:25:19.038 00:25:19.038 ' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:19.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.038 --rc genhtml_branch_coverage=1 00:25:19.038 --rc genhtml_function_coverage=1 00:25:19.038 --rc genhtml_legend=1 00:25:19.038 --rc geninfo_all_blocks=1 00:25:19.038 --rc geninfo_unexecuted_blocks=1 00:25:19.038 00:25:19.038 ' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.038 09:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:19.038 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.039 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.570 
09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.570 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:21.571 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:21.571 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:21.571 Found net devices under 0000:84:00.0: cvl_0_0 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:21.571 Found net devices under 0000:84:00.1: cvl_0_1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.571 09:46:16 
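The device discovery traced above amounts to matching the supported PCI IDs on the bus and then reading each interface name out of sysfs. A minimal by-hand equivalent, using the 0000:84:00.x functions and the 8086:159b (E810) ID reported in the log:

    lspci -Dnn | grep -i '8086:159b'                # the two E810 functions found above
    ls /sys/bus/pci/devices/0000:84:00.0/net/       # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:84:00.1/net/       # -> cvl_0_1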
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.571 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:25:21.830 00:25:21.830 --- 10.0.0.2 ping statistics --- 00:25:21.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.830 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:21.830 00:25:21.830 --- 10.0.0.1 ping statistics --- 00:25:21.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.830 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.830 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1584881 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1584881 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1584881 ']' 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.831 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:21.831 [2024-10-07 09:46:16.524390] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
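Condensed, the nvmf_tcp_init trace above builds a back-to-back, two-port topology out of the discovered E810 interfaces: cvl_0_0 becomes the target port inside a private network namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in the firewall, and one ping in each direction confirms the link. The same steps by hand, with the values taken from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP port; tagged with an SPDK_NVMF comment in the real run
    ping -c 1 10.0.0.2                                                    # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> initiator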
00:25:21.831 [2024-10-07 09:46:16.524554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.831 [2024-10-07 09:46:16.627846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.089 [2024-10-07 09:46:16.746967] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.089 [2024-10-07 09:46:16.747017] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.090 [2024-10-07 09:46:16.747049] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.090 [2024-10-07 09:46:16.747063] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.090 [2024-10-07 09:46:16.747074] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.090 [2024-10-07 09:46:16.747752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:22.090 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.090 09:46:16 
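The target here is deliberately started with --wait-for-rpc: that holds back subsystem initialization so the iobuf small-buffer pool can be shrunk to 154 buffers of 8192 bytes before framework_start_init runs, and that undersized pool is what the wait_for_buf test later relies on. A by-hand sketch of the same bring-up, with paths shortened; rpc_cmd in the trace talks to the target's RPC socket, and scripts/rpc.py is assumed here as the equivalent front end, with the argument strings copied verbatim from the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # the harness waits for the RPC socket at /var/tmp/spdk.sock before issuing RPCs
    # rpc.py stands in for the test's rpc_cmd helper (assumption); arguments are verbatim from the trace
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init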
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 Malloc0 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.348 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 [2024-10-07 09:46:16.997094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.348 [2024-10-07 09:46:17.021277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.348 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:22.349 [2024-10-07 09:46:17.098034] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:23.728 Initializing NVMe Controllers 00:25:23.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:23.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:23.728 Initialization complete. Launching workers. 00:25:23.728 ======================================================== 00:25:23.728 Latency(us) 00:25:23.728 Device Information : IOPS MiB/s Average min max 00:25:23.728 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32231.03 7027.37 63850.89 00:25:23.728 ======================================================== 00:25:23.728 Total : 129.00 16.12 32231.03 7027.37 63850.89 00:25:23.728 00:25:23.728 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:23.728 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:23.728 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.728 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:23.728 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.986 rmmod nvme_tcp 00:25:23.986 rmmod nvme_fabrics 00:25:23.986 rmmod nvme_keyring 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1584881 ']' 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1584881 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1584881 ']' 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1584881 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
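The pass criterion of nvmf_wait_for_buf is visible just above: after a one-second 128 KiB random-read run at queue depth 4, iobuf_get_stats has to report a non-zero small_pool.retry count for the nvmf_TCP module (2038 here), proving that requests really did have to queue for buffers from the deliberately undersized pool; the roughly 32 ms average latency in the results table is a consequence of that waiting, not a regression. Pieced together from the trace (a sketch with paths shortened, again assuming scripts/rpc.py as the stand-in for rpc_cmd; all flags are copied from the log):

    # rpc.py stands in for rpc_cmd (assumption); flags below are verbatim from the trace
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24      # small shared-buffer count (-n 24) makes starvation easy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retry=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    (( retry != 0 ))        # the test only fails if no request ever had to wait for a buffer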
common/autotest_common.sh@955 -- # uname 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1584881 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1584881' 00:25:23.986 killing process with pid 1584881 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1584881 00:25:23.986 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1584881 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.245 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.774 00:25:26.774 real 0m7.770s 00:25:26.774 user 0m3.768s 00:25:26.774 sys 0m2.686s 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:26.774 ************************************ 00:25:26.774 END TEST nvmf_wait_for_buf 00:25:26.774 ************************************ 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:26.774 09:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.774 09:46:21 
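Teardown (nvmftestfini) mirrors the setup: the initiator-side kernel modules are unloaded, the target process is killed and reaped, and only the firewall rules carrying the SPDK_NVMF comment added earlier are stripped back out of iptables. Note that _remove_spdk_ns runs with its trace fd redirected ('15> /dev/null' above), so its internals, presumably including deletion of the cvl_0_0_ns_spdk namespace, do not show up in the log. Roughly:

    modprobe -r nvme-tcp nvme-fabrics                        # drop the initiator modules loaded for the test
    kill "$nvmfpid" && wait "$nvmfpid"                       # stop nvmf_tgt (pid 1584881 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove only the SPDK-tagged rules
    ip -4 addr flush cvl_0_1                                 # clear the initiator address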
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:29.307 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:29.307 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:29.307 Found net devices under 0000:84:00.0: cvl_0_0 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.307 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:29.308 Found net devices under 0000:84:00.1: cvl_0_1 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:29.308 ************************************ 00:25:29.308 START TEST nvmf_perf_adq 00:25:29.308 ************************************ 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:29.308 * Looking for test storage... 00:25:29.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:29.308 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.308 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:29.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.308 --rc genhtml_branch_coverage=1 00:25:29.308 --rc genhtml_function_coverage=1 00:25:29.308 --rc genhtml_legend=1 00:25:29.308 --rc geninfo_all_blocks=1 00:25:29.308 --rc geninfo_unexecuted_blocks=1 00:25:29.308 00:25:29.308 ' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:29.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.308 --rc genhtml_branch_coverage=1 00:25:29.308 --rc genhtml_function_coverage=1 00:25:29.308 --rc genhtml_legend=1 00:25:29.308 --rc geninfo_all_blocks=1 00:25:29.308 --rc geninfo_unexecuted_blocks=1 00:25:29.308 00:25:29.308 ' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:29.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.308 --rc genhtml_branch_coverage=1 00:25:29.308 --rc genhtml_function_coverage=1 00:25:29.308 --rc genhtml_legend=1 00:25:29.308 --rc geninfo_all_blocks=1 00:25:29.308 --rc geninfo_unexecuted_blocks=1 00:25:29.308 00:25:29.308 ' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:29.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.308 --rc genhtml_branch_coverage=1 00:25:29.308 --rc genhtml_function_coverage=1 00:25:29.308 --rc genhtml_legend=1 00:25:29.308 --rc geninfo_all_blocks=1 00:25:29.308 --rc geninfo_unexecuted_blocks=1 00:25:29.308 00:25:29.308 ' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
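The scripts/common.sh trace above is an element-wise version compare: the installed lcov version (1.15, taken from lcov --version | awk '{print $NF}') is split on '.', '-' and ':' and compared against 2, and since it sorts lower, the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' form is kept for LCOV_OPTS. Stripped of the helper functions, the comparison boils down to:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    for (( v = 0; v < 2; v++ )); do                                         # walk the longer of the two arrays
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo greater; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo less; break; }        # 1 < 2, so "less" on the first element
    done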
00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:29.308 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:29.309 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.309 09:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.847 09:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:31.847 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.847 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:31.848 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:31.848 Found net devices under 0000:84:00.0: cvl_0_0 00:25:31.848 09:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:31.848 Found net devices under 0000:84:00.1: cvl_0_1 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:31.848 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:32.786 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:34.689 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
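
The scan above resolves each E810 port (0x8086:0x159b at 0000:84:00.0 and 0000:84:00.1) to its kernel netdev by globbing sysfs, and adq_reload_driver then unloads and reloads the ice driver before the test proper. A standalone sketch of the same lookup and reload, assuming those PCI addresses exist on the host:

    # Map the two E810 functions found above to their net devices via sysfs
    for pci in 0000:84:00.0 0000:84:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "$pci -> ${netdir##*/}"   # prints cvl_0_0 / cvl_0_1 on this node
        done
    done

    # Driver reload as performed by adq_reload_driver
    modprobe -a sch_mqprio
    rmmod ice && modprobe ice && sleep 5
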
gather_supported_nvmf_pci_devs 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:39.972 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:39.972 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:39.973 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:39.973 Found net devices under 0000:84:00.0: cvl_0_0 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:39.973 Found net devices under 0000:84:00.1: cvl_0_1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:25:39.973 00:25:39.973 --- 10.0.0.2 ping statistics --- 00:25:39.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.973 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:39.973 00:25:39.973 --- 10.0.0.1 ping statistics --- 00:25:39.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.973 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1589756 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1589756 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1589756 ']' 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.973 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.973 [2024-10-07 09:46:34.595094] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
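
The plumbing recorded above gives the target its own network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the host namespace as 10.0.0.1, a comment-tagged iptables rule opens port 4420, and nvmf_tgt is launched inside the namespace. Condensed into a runnable sketch (paths are the ones from this workspace; backgrounding with & stands in for the harness's waitforlisten logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # host -> target path
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host path
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
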
00:25:39.973 [2024-10-07 09:46:34.595173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.973 [2024-10-07 09:46:34.663096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.973 [2024-10-07 09:46:34.775727] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.973 [2024-10-07 09:46:34.775789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.973 [2024-10-07 09:46:34.775803] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.973 [2024-10-07 09:46:34.775815] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.973 [2024-10-07 09:46:34.775824] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.973 [2024-10-07 09:46:34.777728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.973 [2024-10-07 09:46:34.777787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.973 [2024-10-07 09:46:34.777809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.973 [2024-10-07 09:46:34.777814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.232 
09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.232 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 [2024-10-07 09:46:35.063206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 Malloc1 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 [2024-10-07 09:46:35.116618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1589902 00:25:40.490 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:40.491 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
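
Each rpc_cmd above is the test harness's wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the same target setup can be expressed as plain RPC calls. A sketch using the method names and arguments recorded above (the rpc.py location is assumed to be the SPDK checkout in this workspace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
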
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:42.393 "tick_rate": 2700000000, 00:25:42.393 "poll_groups": [ 00:25:42.393 { 00:25:42.393 "name": "nvmf_tgt_poll_group_000", 00:25:42.393 "admin_qpairs": 1, 00:25:42.393 "io_qpairs": 1, 00:25:42.393 "current_admin_qpairs": 1, 00:25:42.393 "current_io_qpairs": 1, 00:25:42.393 "pending_bdev_io": 0, 00:25:42.393 "completed_nvme_io": 18645, 00:25:42.393 "transports": [ 00:25:42.393 { 00:25:42.393 "trtype": "TCP" 00:25:42.393 } 00:25:42.393 ] 00:25:42.393 }, 00:25:42.393 { 00:25:42.393 "name": "nvmf_tgt_poll_group_001", 00:25:42.393 "admin_qpairs": 0, 00:25:42.393 "io_qpairs": 1, 00:25:42.393 "current_admin_qpairs": 0, 00:25:42.393 "current_io_qpairs": 1, 00:25:42.393 "pending_bdev_io": 0, 00:25:42.393 "completed_nvme_io": 18864, 00:25:42.393 "transports": [ 00:25:42.393 { 00:25:42.393 "trtype": "TCP" 00:25:42.393 } 00:25:42.393 ] 00:25:42.393 }, 00:25:42.393 { 00:25:42.393 "name": "nvmf_tgt_poll_group_002", 00:25:42.393 "admin_qpairs": 0, 00:25:42.393 "io_qpairs": 1, 00:25:42.393 "current_admin_qpairs": 0, 00:25:42.393 "current_io_qpairs": 1, 00:25:42.393 "pending_bdev_io": 0, 00:25:42.393 "completed_nvme_io": 19279, 00:25:42.393 "transports": [ 00:25:42.393 { 00:25:42.393 "trtype": "TCP" 00:25:42.393 } 00:25:42.393 ] 00:25:42.393 }, 00:25:42.393 { 00:25:42.393 "name": "nvmf_tgt_poll_group_003", 00:25:42.393 "admin_qpairs": 0, 00:25:42.393 "io_qpairs": 1, 00:25:42.393 "current_admin_qpairs": 0, 00:25:42.393 "current_io_qpairs": 1, 00:25:42.393 "pending_bdev_io": 0, 00:25:42.393 "completed_nvme_io": 18878, 00:25:42.393 "transports": [ 00:25:42.393 { 00:25:42.393 "trtype": "TCP" 00:25:42.393 } 00:25:42.393 ] 00:25:42.393 } 00:25:42.393 ] 00:25:42.393 }' 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:42.393 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1589902 00:25:50.502 Initializing NVMe Controllers 00:25:50.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:50.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:50.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:50.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:50.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:25:50.502 Initialization complete. Launching workers. 00:25:50.502 ======================================================== 00:25:50.502 Latency(us) 00:25:50.502 Device Information : IOPS MiB/s Average min max 00:25:50.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9950.63 38.87 6431.72 2561.88 10623.46 00:25:50.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10040.73 39.22 6374.00 2684.38 10743.36 00:25:50.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10189.73 39.80 6279.83 2133.32 10522.55 00:25:50.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9995.93 39.05 6403.15 2322.55 10446.54 00:25:50.502 ======================================================== 00:25:50.502 Total : 40177.03 156.94 6371.67 2133.32 10743.36 00:25:50.502 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.502 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.502 rmmod nvme_tcp 00:25:50.502 rmmod nvme_fabrics 00:25:50.761 rmmod nvme_keyring 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1589756 ']' 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1589756 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1589756 ']' 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1589756 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589756 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589756' 00:25:50.761 killing process with pid 1589756 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1589756 00:25:50.761 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1589756 00:25:51.020 09:46:45 
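
Two things worth pulling out of the baseline pass above: the load comes from spdk_nvme_perf (queue depth 64, 4 KiB random reads, 10 s, initiator cores 4-7 via -c 0xF0), and before waiting on it the harness asserts that every nvmf poll group owns exactly one I/O qpair, which is the distribution behind the roughly 40.2k IOPS at about 6.37 ms average latency shown in the table. A sketch of the same check, mirroring the jq pipeline recorded above (rpc.py path as in the earlier sketch):

    # Load generator used above: QD 64, 4 KiB random reads, 10 s, cores 4-7
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

    # Expect one active I/O qpair per poll group (4 groups on this 0xF target mask)
    count=$($rpc nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [ "$count" -ne 4 ] && echo "I/O qpairs are not spread across all poll groups" >&2
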
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.020 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.995 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.995 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:52.995 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:52.995 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:53.929 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:55.832 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
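
The teardown captured above is deliberately narrow: it strips only the iptables rules tagged with the SPDK_NVMF comment, removes the test namespace and stale addresses, unloads the kernel NVMe/TCP initiator modules, and then reloads ice for the ADQ pass. A rough manual equivalent (the harness uses its own _remove_spdk_ns helper; ip netns delete is an assumed stand-in for it):

    # Keep every iptables rule except the SPDK_NVMF-tagged ones (what iptr does above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Namespace and address cleanup (namespace/interface names from this run)
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1

    # Unload the initiator-side modules pulled in by 'modprobe nvme-tcp' earlier
    modprobe -v -r nvme-tcp
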
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:01.103 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:01.103 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:01.103 Found net devices under 0000:84:00.0: cvl_0_0 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:01.103 Found net devices under 0000:84:00.1: cvl_0_1 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.103 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.103 09:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:26:01.104 00:26:01.104 --- 10.0.0.2 ping statistics --- 00:26:01.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.104 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:26:01.104 00:26:01.104 --- 10.0.0.1 ping statistics --- 00:26:01.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.104 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:01.104 net.core.busy_poll = 1 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:01.104 net.core.busy_read = 1 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1592466 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1592466 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1592466 ']' 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.104 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.104 [2024-10-07 09:46:55.868268] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:01.104 [2024-10-07 09:46:55.868341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.363 [2024-10-07 09:46:55.938137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:01.363 [2024-10-07 09:46:56.055768] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
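
adq_configure_driver above is the core of the ADQ variant: hardware TC offload is switched on for the E810 port, busy polling is enabled, mqprio splits the device into two traffic classes of two queues each, and a flower filter offloaded to the NIC (skip_sw) steers NVMe/TCP traffic for 10.0.0.2:4420 into the second class; set_xps_rxqs then ties transmit queue selection to the matching receive queues. Condensed from the commands recorded above and run inside the target namespace (NS is just a shorthand prefix for this sketch):

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ set); skb priority 1 maps to TC1
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Match NVMe/TCP to the listener and pin it to TC1 entirely in hardware
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
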
00:26:01.363 [2024-10-07 09:46:56.055829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.363 [2024-10-07 09:46:56.055845] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.363 [2024-10-07 09:46:56.055859] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.363 [2024-10-07 09:46:56.055880] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.363 [2024-10-07 09:46:56.057768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.363 [2024-10-07 09:46:56.057850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.363 [2024-10-07 09:46:56.057926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.363 [2024-10-07 09:46:56.057931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.363 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 [2024-10-07 09:46:56.345757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 Malloc1 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 [2024-10-07 09:46:56.399010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1592544 00:26:01.621 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:01.622 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:04.153 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:04.153 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.153 09:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.153 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.153 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:04.153 "tick_rate": 2700000000, 00:26:04.153 "poll_groups": [ 00:26:04.153 { 00:26:04.153 "name": "nvmf_tgt_poll_group_000", 00:26:04.153 "admin_qpairs": 1, 00:26:04.153 "io_qpairs": 2, 00:26:04.153 "current_admin_qpairs": 1, 00:26:04.154 "current_io_qpairs": 2, 00:26:04.154 "pending_bdev_io": 0, 00:26:04.154 "completed_nvme_io": 24643, 00:26:04.154 "transports": [ 00:26:04.154 { 00:26:04.154 "trtype": "TCP" 00:26:04.154 } 00:26:04.154 ] 00:26:04.154 }, 00:26:04.154 { 00:26:04.154 "name": "nvmf_tgt_poll_group_001", 00:26:04.154 "admin_qpairs": 0, 00:26:04.154 "io_qpairs": 2, 00:26:04.154 "current_admin_qpairs": 0, 00:26:04.154 "current_io_qpairs": 2, 00:26:04.154 "pending_bdev_io": 0, 00:26:04.154 "completed_nvme_io": 25229, 00:26:04.154 "transports": [ 00:26:04.154 { 00:26:04.154 "trtype": "TCP" 00:26:04.154 } 00:26:04.154 ] 00:26:04.154 }, 00:26:04.154 { 00:26:04.154 "name": "nvmf_tgt_poll_group_002", 00:26:04.154 "admin_qpairs": 0, 00:26:04.154 "io_qpairs": 0, 00:26:04.154 "current_admin_qpairs": 0, 00:26:04.154 "current_io_qpairs": 0, 00:26:04.154 "pending_bdev_io": 0, 00:26:04.154 "completed_nvme_io": 0, 00:26:04.154 "transports": [ 00:26:04.154 { 00:26:04.154 "trtype": "TCP" 00:26:04.154 } 00:26:04.154 ] 00:26:04.154 }, 00:26:04.154 { 00:26:04.154 "name": "nvmf_tgt_poll_group_003", 00:26:04.154 "admin_qpairs": 0, 00:26:04.154 "io_qpairs": 0, 00:26:04.154 "current_admin_qpairs": 0, 00:26:04.154 "current_io_qpairs": 0, 00:26:04.154 "pending_bdev_io": 0, 00:26:04.154 "completed_nvme_io": 0, 00:26:04.154 "transports": [ 00:26:04.154 { 00:26:04.154 "trtype": "TCP" 00:26:04.154 } 00:26:04.154 ] 00:26:04.154 } 00:26:04.154 ] 00:26:04.154 }' 00:26:04.154 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:04.154 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:04.154 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:04.154 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:04.154 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1592544 00:26:12.263 Initializing NVMe Controllers 00:26:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:12.263 Initialization complete. Launching workers. 
00:26:12.263 ======================================================== 00:26:12.263 Latency(us) 00:26:12.263 Device Information : IOPS MiB/s Average min max 00:26:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6204.50 24.24 10352.31 1731.93 54975.46 00:26:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7039.10 27.50 9093.42 1775.74 54860.88 00:26:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6433.90 25.13 9949.05 1844.51 55495.25 00:26:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6722.80 26.26 9518.97 1617.73 55928.84 00:26:12.263 ======================================================== 00:26:12.263 Total : 26400.29 103.13 9706.17 1617.73 55928.84 00:26:12.263 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.263 rmmod nvme_tcp 00:26:12.263 rmmod nvme_fabrics 00:26:12.263 rmmod nvme_keyring 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1592466 ']' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1592466 ']' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1592466' 00:26:12.263 killing process with pid 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1592466 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:12.263 
09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.263 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:14.795 00:26:14.795 real 0m45.240s 00:26:14.795 user 2m41.749s 00:26:14.795 sys 0m10.081s 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:14.795 ************************************ 00:26:14.795 END TEST nvmf_perf_adq 00:26:14.795 ************************************ 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.795 ************************************ 00:26:14.795 START TEST nvmf_shutdown 00:26:14.795 ************************************ 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:14.795 * Looking for test storage... 
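Before the shutdown tests take over, the target-side half of the nvmf_perf_adq case that just finished is ordinary RPC traffic. Collapsed out of the xtrace noise it looks roughly like the sketch below; the rpc.py spelling and $SPDK_DIR are assumptions (the harness wraps the same calls in rpc_cmd against /var/tmp/spdk.sock), while the option values are the ones visible in the trace:

  RPC="$SPDK_DIR/scripts/rpc.py"     # assumed wrapper; rpc_cmd in the log talks to the same socket

  # nvmf_tgt was started with --wait-for-rpc, so socket options go in before framework init
  impl=$($RPC sock_get_default_impl | jq -r .impl_name)        # "posix" in this run
  $RPC sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
  $RPC framework_start_init

  # TCP transport with --sock-priority 1 so its connections land in the ADQ traffic class
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # pass criterion while spdk_nvme_perf runs on cores 0xF0: with placement-id steering
  # the I/O qpairs stay concentrated, i.e. at least 2 of the 4 poll groups remain idle
  idle=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  (( idle < 2 )) && echo "ADQ steering check failed"

In this run idle came out as 2 (poll groups 002 and 003 carried no I/O qpairs), so the [[ 2 -lt 2 ]] check in the trace did not trip.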
00:26:14.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:14.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.795 --rc genhtml_branch_coverage=1 00:26:14.795 --rc genhtml_function_coverage=1 00:26:14.795 --rc genhtml_legend=1 00:26:14.795 --rc geninfo_all_blocks=1 00:26:14.795 --rc geninfo_unexecuted_blocks=1 00:26:14.795 00:26:14.795 ' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:14.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.795 --rc genhtml_branch_coverage=1 00:26:14.795 --rc genhtml_function_coverage=1 00:26:14.795 --rc genhtml_legend=1 00:26:14.795 --rc geninfo_all_blocks=1 00:26:14.795 --rc geninfo_unexecuted_blocks=1 00:26:14.795 00:26:14.795 ' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:14.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.795 --rc genhtml_branch_coverage=1 00:26:14.795 --rc genhtml_function_coverage=1 00:26:14.795 --rc genhtml_legend=1 00:26:14.795 --rc geninfo_all_blocks=1 00:26:14.795 --rc geninfo_unexecuted_blocks=1 00:26:14.795 00:26:14.795 ' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:14.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.795 --rc genhtml_branch_coverage=1 00:26:14.795 --rc genhtml_function_coverage=1 00:26:14.795 --rc genhtml_legend=1 00:26:14.795 --rc geninfo_all_blocks=1 00:26:14.795 --rc geninfo_unexecuted_blocks=1 00:26:14.795 00:26:14.795 ' 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
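The lcov gate being traced here is scripts/common.sh comparing dotted version strings field by field; a minimal standalone equivalent of that logic (function name is mine, the real helpers are lt/cmp_versions/decimal):

  # true (exit 0) when $1 sorts strictly before $2, comparing numeric .- separated fields
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          local a=${v1[i]:-0} b=${v2[i]:-0}    # missing fields count as 0; fields assumed numeric
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1    # equal is not "less than"
  }

  # as in the trace: keep the pre-2.0 lcov option spelling when the tool is older than 2
  version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'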
00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:14.795 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:14.796 09:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:14.796 ************************************ 00:26:14.796 START TEST nvmf_shutdown_tc1 00:26:14.796 ************************************ 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.796 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.329 09:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.329 09:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:17.329 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:17.329 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:17.329 Found net devices under 0000:84:00.0: cvl_0_0 00:26:17.329 09:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:17.329 Found net devices under 0000:84:00.1: cvl_0_1 00:26:17.329 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:26:17.330 00:26:17.330 --- 10.0.0.2 ping statistics --- 00:26:17.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.330 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:26:17.330 00:26:17.330 --- 10.0.0.1 ping statistics --- 00:26:17.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.330 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1595726 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1595726 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1595726 ']' 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
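The nvmf_tcp_init plumbing just traced (the same dance the perf_adq case went through earlier) pairs the two E810 ports found under 0000:84:00.x into a loopback topology, with the target port hidden in its own network namespace. Stripped of xtrace it is roughly the following; $SPDK_DIR is again a stand-in for the checkout path:

  TGT_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"                  # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # open the NVMe/TCP port, tagged with an SPDK_NVMF comment so the later
  # "iptables-save | grep -v SPDK_NVMF | iptables-restore" teardown can strip it again
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INIT_IF -p tcp --dport 4420 -j ACCEPT"

  ping -c 1 10.0.0.2                                      # root namespace -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> initiator port

  # every nvmf_tgt in this test is then launched inside the namespace, e.g. for shutdown_tc1:
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E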
00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.330 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.330 [2024-10-07 09:47:11.920553] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:17.330 [2024-10-07 09:47:11.920634] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.330 [2024-10-07 09:47:11.987639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.330 [2024-10-07 09:47:12.102077] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.330 [2024-10-07 09:47:12.102137] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.330 [2024-10-07 09:47:12.102150] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.330 [2024-10-07 09:47:12.102160] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.330 [2024-10-07 09:47:12.102183] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.330 [2024-10-07 09:47:12.103919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.330 [2024-10-07 09:47:12.104011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.330 [2024-10-07 09:47:12.104091] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:17.330 [2024-10-07 09:47:12.104095] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.588 [2024-10-07 09:47:12.267622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:17.588 09:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.588 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.588 Malloc1 
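The create_subsystems loop above only shows bare cat calls because xtrace does not echo heredoc bodies. Judging by the Malloc1..Malloc10 bdevs and per-cnode listeners that appear next, what lands in rpcs.txt is one block of RPCs per subsystem index, roughly like the sketch below (command spellings inferred, not the verbatim shutdown.sh text; serial numbers omitted):

  : > "$testdir/rpcs.txt"            # the trace removes test/nvmf/target/rpcs.txt first
  for i in {1..10}; do
      cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done
  rpc_cmd < "$testdir/rpcs.txt"      # the bare rpc_cmd at shutdown.sh:36 presumably replays the file as one batch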
00:26:17.588 [2024-10-07 09:47:12.346659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.588 Malloc2 00:26:17.846 Malloc3 00:26:17.846 Malloc4 00:26:17.846 Malloc5 00:26:17.846 Malloc6 00:26:17.846 Malloc7 00:26:18.105 Malloc8 00:26:18.105 Malloc9 00:26:18.105 Malloc10 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1595907 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1595907 /var/tmp/bdevperf.sock 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1595907 ']' 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:18.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
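The bdev_svc side is configured purely through JSON handed over file descriptor 63 (--json /dev/fd/63). gen_nvmf_target_json, whose heredoc template is traced next, emits one bdev_nvme_attach_controller entry per subsystem index; with this run's values the first entry expands to roughly the block below (the hostnqn spelling comes from the template, the outer wrapper around the entries is an assumption about the helper, everything else is visible in the trace):

  gen_one_controller() {     # illustrative helper, not the real gen_nvmf_target_json
      cat <<EOF
{
  "params": {
    "name": "Nvme$1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$1",
    "hostnqn": "nqn.2016-06.io.spdk:host$1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  }
  # the real helper joins ten such entries into a bdev subsystem config, and the app reads it
  # over an extra fd, equivalent to: bdev_svc ... --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)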
00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.105 { 00:26:18.105 "params": { 00:26:18.105 "name": "Nvme$subsystem", 00:26:18.105 "trtype": "$TEST_TRANSPORT", 00:26:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.105 "adrfam": "ipv4", 00:26:18.105 "trsvcid": "$NVMF_PORT", 00:26:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.105 "hdgst": ${hdgst:-false}, 00:26:18.105 "ddgst": ${ddgst:-false} 00:26:18.105 }, 00:26:18.105 "method": "bdev_nvme_attach_controller" 00:26:18.105 } 00:26:18.105 EOF 00:26:18.105 )") 00:26:18.105 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.106 09:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.106 { 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme$subsystem", 00:26:18.106 "trtype": "$TEST_TRANSPORT", 00:26:18.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "$NVMF_PORT", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.106 "hdgst": ${hdgst:-false}, 00:26:18.106 "ddgst": ${ddgst:-false} 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 } 00:26:18.106 EOF 00:26:18.106 )") 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.106 { 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme$subsystem", 00:26:18.106 "trtype": "$TEST_TRANSPORT", 00:26:18.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "$NVMF_PORT", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.106 "hdgst": ${hdgst:-false}, 00:26:18.106 "ddgst": ${ddgst:-false} 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 } 00:26:18.106 EOF 00:26:18.106 )") 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:18.106 { 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme$subsystem", 00:26:18.106 "trtype": "$TEST_TRANSPORT", 00:26:18.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "$NVMF_PORT", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.106 "hdgst": ${hdgst:-false}, 00:26:18.106 "ddgst": ${ddgst:-false} 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 } 00:26:18.106 EOF 00:26:18.106 )") 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
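[editor's note] The wall of repeated heredocs above is gen_nvmf_target_json at work: for each requested subsystem it appends one bdev_nvme_attach_controller fragment to a bash array, then joins the fragments with IFS=, and runs the result through jq, which produces the resolved JSON printed next in the trace. Below is a simplified sketch of that pattern; the field set is trimmed (hdgst/ddgst and adrfam omitted) and the outer subsystems/bdev wrapper is reconstructed as the usual shape of such --json configs rather than read from the trace.

#!/usr/bin/env bash
# Sketch of the heredoc-array-jq pattern behind the JSON generator traced above.
gen_target_json() {
    local n config=()
    for n in "$@"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$n", "trtype": "tcp", "traddr": "10.0.0.2",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$n",
              "hostnqn": "nqn.2016-06.io.spdk:host$n" },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,            # "${config[*]}" joins the fragments with commas
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}
gen_target_json 1 2 3      # emits a config that bdev_svc or bdevperf can consume

As a usage note, the same generator output later feeds the bdevperf run launched further down as build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1: queue depth 64, 64 KiB I/Os, verify workload, 1-second run, matching the job headers in the results table that follows. At 64 KiB per I/O, the reported aggregate of 2250.33 IOPS works out to the 140.65 MiB/s shown in the same table.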
00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:18.106 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme1", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme2", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme3", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme4", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme5", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme6", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme7", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme8", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme9", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 },{ 00:26:18.106 "params": { 00:26:18.106 "name": "Nvme10", 00:26:18.106 "trtype": "tcp", 00:26:18.106 "traddr": "10.0.0.2", 00:26:18.106 "adrfam": "ipv4", 00:26:18.106 "trsvcid": "4420", 00:26:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:18.106 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:18.106 "hdgst": false, 00:26:18.106 "ddgst": false 00:26:18.106 }, 00:26:18.106 "method": "bdev_nvme_attach_controller" 00:26:18.106 }' 00:26:18.106 [2024-10-07 09:47:12.851645] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:18.106 [2024-10-07 09:47:12.851733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:18.106 [2024-10-07 09:47:12.915008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.364 [2024-10-07 09:47:13.025919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1595907 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:20.261 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:21.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1595907 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1595726 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 
"trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 "params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.635 "hdgst": ${hdgst:-false}, 00:26:21.635 "ddgst": ${ddgst:-false} 00:26:21.635 }, 00:26:21.635 "method": "bdev_nvme_attach_controller" 00:26:21.635 } 00:26:21.635 EOF 00:26:21.635 )") 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.635 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.635 { 00:26:21.635 
"params": { 00:26:21.635 "name": "Nvme$subsystem", 00:26:21.635 "trtype": "$TEST_TRANSPORT", 00:26:21.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.635 "adrfam": "ipv4", 00:26:21.635 "trsvcid": "$NVMF_PORT", 00:26:21.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.636 "hdgst": ${hdgst:-false}, 00:26:21.636 "ddgst": ${ddgst:-false} 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 } 00:26:21.636 EOF 00:26:21.636 )") 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.636 { 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme$subsystem", 00:26:21.636 "trtype": "$TEST_TRANSPORT", 00:26:21.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "$NVMF_PORT", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.636 "hdgst": ${hdgst:-false}, 00:26:21.636 "ddgst": ${ddgst:-false} 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 } 00:26:21.636 EOF 00:26:21.636 )") 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:21.636 { 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme$subsystem", 00:26:21.636 "trtype": "$TEST_TRANSPORT", 00:26:21.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "$NVMF_PORT", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.636 "hdgst": ${hdgst:-false}, 00:26:21.636 "ddgst": ${ddgst:-false} 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 } 00:26:21.636 EOF 00:26:21.636 )") 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:21.636 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme1", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme2", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme3", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme4", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme5", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme6", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme7", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme8", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme9", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 },{ 00:26:21.636 "params": { 00:26:21.636 "name": "Nvme10", 00:26:21.636 "trtype": "tcp", 00:26:21.636 "traddr": "10.0.0.2", 00:26:21.636 "adrfam": "ipv4", 00:26:21.636 "trsvcid": "4420", 00:26:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:21.636 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:21.636 "hdgst": false, 00:26:21.636 "ddgst": false 00:26:21.636 }, 00:26:21.636 "method": "bdev_nvme_attach_controller" 00:26:21.636 }' 00:26:21.636 [2024-10-07 09:47:16.077880] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:21.636 [2024-10-07 09:47:16.077998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596329 ] 00:26:21.636 [2024-10-07 09:47:16.152631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.636 [2024-10-07 09:47:16.264502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.009 Running I/O for 1 seconds... 00:26:24.199 1672.00 IOPS, 104.50 MiB/s 00:26:24.199 Latency(us) 00:26:24.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.199 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme1n1 : 1.12 227.65 14.23 0.00 0.00 275375.41 20874.43 257872.02 00:26:24.199 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme2n1 : 1.05 186.95 11.68 0.00 0.00 324126.22 5097.24 267192.70 00:26:24.199 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme3n1 : 1.12 228.24 14.27 0.00 0.00 266803.58 28738.75 256318.58 00:26:24.199 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme4n1 : 1.13 226.40 14.15 0.00 0.00 264860.25 18058.81 262532.36 00:26:24.199 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme5n1 : 1.18 217.23 13.58 0.00 0.00 271865.55 22330.79 284280.60 00:26:24.199 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme6n1 : 1.14 224.47 14.03 0.00 0.00 257660.78 35340.89 248551.35 00:26:24.199 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.199 Verification LBA range: start 0x0 length 0x400 00:26:24.199 Nvme7n1 : 1.14 227.18 14.20 0.00 0.00 248469.01 8252.68 262532.36 00:26:24.199 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.200 Verification LBA range: start 0x0 
length 0x400 00:26:24.200 Nvme8n1 : 1.14 223.91 13.99 0.00 0.00 249237.24 20874.43 265639.25 00:26:24.200 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.200 Verification LBA range: start 0x0 length 0x400 00:26:24.200 Nvme9n1 : 1.16 221.26 13.83 0.00 0.00 247966.72 21748.24 270299.59 00:26:24.200 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:24.200 Verification LBA range: start 0x0 length 0x400 00:26:24.200 Nvme10n1 : 1.20 267.05 16.69 0.00 0.00 202678.84 5364.24 295154.73 00:26:24.200 =================================================================================================================== 00:26:24.200 Total : 2250.33 140.65 0.00 0.00 257982.50 5097.24 295154.73 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.457 rmmod nvme_tcp 00:26:24.457 rmmod nvme_fabrics 00:26:24.457 rmmod nvme_keyring 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1595726 ']' 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1595726 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1595726 ']' 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1595726 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.457 09:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595726 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595726' 00:26:24.457 killing process with pid 1595726 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1595726 00:26:24.457 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1595726 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.389 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.288 00:26:27.288 real 0m12.622s 00:26:27.288 user 0m36.002s 00:26:27.288 sys 0m3.640s 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:27.288 ************************************ 00:26:27.288 END TEST nvmf_shutdown_tc1 00:26:27.288 ************************************ 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.288 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.288 ************************************ 00:26:27.288 START TEST nvmf_shutdown_tc2 00:26:27.288 ************************************ 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 
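[editor's note] Before tc2's device discovery continues, note the teardown order tc1's nvmftestfini followed in the trace just above: sync, unload the NVMe/TCP kernel modules, kill the nvmf_tgt pid, strip the SPDK_NVMF-tagged iptables rules, remove the target namespace and flush the initiator-side address. A standalone sketch of that order; the pid and the interface/namespace names are the ones from this run, and the ip netns delete line is an assumption about what _remove_spdk_ns boils down to here.

#!/usr/bin/env bash
# Sketch of the nvmftestfini cleanup order seen at the end of tc1.
nvmfpid=1595726                          # nvmf_tgt pid from this run
sync
modprobe -v -r nvme-tcp                  # the trace shows nvme_fabrics and
modprobe -v -r nvme-fabrics              # nvme_keyring being dropped as dependencies
kill "$nvmfpid" 2>/dev/null
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only non-SPDK rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # initiator-side test address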
00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:27.288 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.288 09:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.288 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:27.289 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:27.289 Found net devices under 0000:84:00.0: cvl_0_0 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.289 09:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:27.289 Found net devices under 0000:84:00.1: cvl_0_1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
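[editor's note] nvmftestinit then splits the two e810 ports found above into a loopback pair: cvl_0_0 becomes the target side at 10.0.0.2 inside a fresh namespace, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the trace lines that follow carry this wiring out and ping in both directions. Gathered into one sequence (commands mirror the trace; run as root):

#!/usr/bin/env bash
# Sketch: the namespace wiring performed by the ip/iptables commands traced here.
ns=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # reach the target from the root namespace
ip netns exec "$ns" ping -c 1 10.0.0.1          # and the initiator from inside it

Tagging the iptables rule with an SPDK_NVMF comment is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore remove exactly these rules later.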
00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.289 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:26:27.548 00:26:27.548 --- 10.0.0.2 ping statistics --- 00:26:27.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.548 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:26:27.548 00:26:27.548 --- 10.0.0.1 ping statistics --- 00:26:27.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.548 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1597092 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1597092 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1597092 ']' 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
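Note how the port-4420 accept rule added above is tagged with an 'SPDK_NVMF:' comment by the ipts wrapper; the teardown at the end of this test case removes it by filtering the saved ruleset on that comment rather than tracking rule positions. A small sketch of that add/strip pattern, using only the stock iptables tools that appear in the trace:

    # Insert the accept rule, tagged so it can be found again later (mirrors the ipts helper).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown (mirrors the iptr helper near the end of tc2): drop every tagged rule in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore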
00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.548 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.548 [2024-10-07 09:47:22.275062] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:27.549 [2024-10-07 09:47:22.275161] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.807 [2024-10-07 09:47:22.376668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.807 [2024-10-07 09:47:22.555150] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.807 [2024-10-07 09:47:22.555219] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.807 [2024-10-07 09:47:22.555236] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.807 [2024-10-07 09:47:22.555249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.807 [2024-10-07 09:47:22.555261] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.807 [2024-10-07 09:47:22.557936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.807 [2024-10-07 09:47:22.558014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.807 [2024-10-07 09:47:22.558045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:27.807 [2024-10-07 09:47:22.558048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 [2024-10-07 09:47:22.736532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:28.065 09:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.065 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 Malloc1 
00:26:28.065 [2024-10-07 09:47:22.819810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.065 Malloc2 00:26:28.325 Malloc3 00:26:28.325 Malloc4 00:26:28.325 Malloc5 00:26:28.325 Malloc6 00:26:28.325 Malloc7 00:26:28.325 Malloc8 00:26:28.583 Malloc9 00:26:28.583 Malloc10 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1597271 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1597271 /var/tmp/bdevperf.sock 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1597271 ']' 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
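The create_subsystems loop above (shutdown.sh@28/@29) writes one block of RPC commands per subsystem into rpcs.txt and then plays the whole file through rpc_cmd, which is what produces the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice in the log. The heredoc body itself is never expanded in the trace, so the following is a hypothetical reconstruction using standard SPDK rpc.py verbs; the bdev sizes and serial numbers are assumptions:

    # Hypothetical reconstruction of the rpcs.txt batch; only its effects are visible in the trace.
    testdir=./test/nvmf/target                  # the trace uses the full workspace path
    MALLOC_BDEV_SIZE=64 MALLOC_BLOCK_SIZE=512   # assumed values, not shown in the log
    rm -f "$testdir/rpcs.txt"
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    # Played against the running target in one shot (the rpc_cmd call at shutdown.sh@36 in the trace):
    scripts/rpc.py < "$testdir/rpcs.txt"

The JSON that the bdevperf launch line above feeds in through --json is assembled next in the trace.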
00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.583 { 00:26:28.583 "params": { 00:26:28.583 "name": "Nvme$subsystem", 00:26:28.583 "trtype": "$TEST_TRANSPORT", 00:26:28.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.583 "adrfam": "ipv4", 00:26:28.583 "trsvcid": "$NVMF_PORT", 00:26:28.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.583 "hdgst": ${hdgst:-false}, 00:26:28.583 "ddgst": ${ddgst:-false} 00:26:28.583 }, 00:26:28.583 "method": "bdev_nvme_attach_controller" 00:26:28.583 } 00:26:28.583 EOF 00:26:28.583 )") 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.583 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.583 { 00:26:28.583 "params": { 00:26:28.583 "name": "Nvme$subsystem", 00:26:28.583 "trtype": "$TEST_TRANSPORT", 00:26:28.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.583 "adrfam": "ipv4", 00:26:28.583 "trsvcid": "$NVMF_PORT", 00:26:28.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 
"trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.584 { 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme$subsystem", 00:26:28.584 "trtype": "$TEST_TRANSPORT", 00:26:28.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "$NVMF_PORT", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.584 "hdgst": ${hdgst:-false}, 00:26:28.584 "ddgst": ${ddgst:-false} 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 } 00:26:28.584 EOF 00:26:28.584 )") 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
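Each pass through the loop above appends one controller's JSON fragment to config[]; jq . then sanity-checks the assembled document, and the printf that follows joins the fragments with commas (its fully expanded output is printed just below). bdevperf never reads a file on disk: the --json /dev/fd/63 argument on the launch line is bash process substitution. A minimal sketch of that wiring, with a stand-in generator in place of gen_nvmf_target_json:

    # Stand-in generator; the real gen_nvmf_target_json emits the controller config printed below.
    gen_json() {
        printf '{}\n'    # placeholder JSON only
    }
    # <(...) expands to a path such as /dev/fd/63, which bdevperf opens like an ordinary file.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json) \
        -q 64 -o 65536 -w verify -t 10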
00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:26:28.584 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme1", 00:26:28.584 "trtype": "tcp", 00:26:28.584 "traddr": "10.0.0.2", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "4420", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.584 "hdgst": false, 00:26:28.584 "ddgst": false 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 },{ 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme2", 00:26:28.584 "trtype": "tcp", 00:26:28.584 "traddr": "10.0.0.2", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "4420", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:28.584 "hdgst": false, 00:26:28.584 "ddgst": false 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 },{ 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme3", 00:26:28.584 "trtype": "tcp", 00:26:28.584 "traddr": "10.0.0.2", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "4420", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:28.584 "hdgst": false, 00:26:28.584 "ddgst": false 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 },{ 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme4", 00:26:28.584 "trtype": "tcp", 00:26:28.584 "traddr": "10.0.0.2", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "4420", 00:26:28.584 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:28.584 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:28.584 "hdgst": false, 00:26:28.584 "ddgst": false 00:26:28.584 }, 00:26:28.584 "method": "bdev_nvme_attach_controller" 00:26:28.584 },{ 00:26:28.584 "params": { 00:26:28.584 "name": "Nvme5", 00:26:28.584 "trtype": "tcp", 00:26:28.584 "traddr": "10.0.0.2", 00:26:28.584 "adrfam": "ipv4", 00:26:28.584 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 },{ 00:26:28.585 "params": { 00:26:28.585 "name": "Nvme6", 00:26:28.585 "trtype": "tcp", 00:26:28.585 "traddr": "10.0.0.2", 00:26:28.585 "adrfam": "ipv4", 00:26:28.585 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 },{ 00:26:28.585 "params": { 00:26:28.585 "name": "Nvme7", 00:26:28.585 "trtype": "tcp", 00:26:28.585 "traddr": "10.0.0.2", 00:26:28.585 "adrfam": "ipv4", 00:26:28.585 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 },{ 00:26:28.585 "params": { 00:26:28.585 "name": "Nvme8", 00:26:28.585 "trtype": "tcp", 00:26:28.585 "traddr": "10.0.0.2", 00:26:28.585 "adrfam": "ipv4", 00:26:28.585 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 },{ 00:26:28.585 "params": { 00:26:28.585 "name": "Nvme9", 00:26:28.585 "trtype": "tcp", 00:26:28.585 "traddr": "10.0.0.2", 00:26:28.585 "adrfam": "ipv4", 00:26:28.585 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 },{ 00:26:28.585 "params": { 00:26:28.585 "name": "Nvme10", 00:26:28.585 "trtype": "tcp", 00:26:28.585 "traddr": "10.0.0.2", 00:26:28.585 "adrfam": "ipv4", 00:26:28.585 "trsvcid": "4420", 00:26:28.585 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:28.585 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:28.585 "hdgst": false, 00:26:28.585 "ddgst": false 00:26:28.585 }, 00:26:28.585 "method": "bdev_nvme_attach_controller" 00:26:28.585 }' 00:26:28.585 [2024-10-07 09:47:23.353351] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:28.585 [2024-10-07 09:47:23.353440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597271 ] 00:26:28.843 [2024-10-07 09:47:23.422220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.843 [2024-10-07 09:47:23.534452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.792 Running I/O for 10 seconds... 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:31.050 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.051 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.051 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.051 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:31.051 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:31.051 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.308 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1597271 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1597271 ']' 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1597271 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597271 00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.567 09:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597271'
00:26:31.567 killing process with pid 1597271
00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1597271
00:26:31.567 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1597271
00:26:31.567 Received shutdown signal, test time was about 0.850626 seconds
00:26:31.567
00:26:31.567 Latency(us)
00:26:31.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme1n1 : 0.83 232.01 14.50 0.00 0.00 270805.27 38253.61 223696.21
00:26:31.567 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme2n1 : 0.84 235.45 14.72 0.00 0.00 260376.44 2451.53 228356.55
00:26:31.567 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme3n1 : 0.82 234.67 14.67 0.00 0.00 255070.31 20486.07 260978.92
00:26:31.567 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme4n1 : 0.82 239.12 14.94 0.00 0.00 242149.33 7378.87 248551.35
00:26:31.567 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme5n1 : 0.85 225.93 14.12 0.00 0.00 253074.14 21262.79 273406.48
00:26:31.567 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme6n1 : 0.85 226.93 14.18 0.00 0.00 245359.76 24175.50 276513.37
00:26:31.567 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme7n1 : 0.84 228.31 14.27 0.00 0.00 237300.75 20000.62 253211.69
00:26:31.567 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme8n1 : 0.83 230.84 14.43 0.00 0.00 227815.47 21359.88 254765.13
00:26:31.567 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme9n1 : 0.81 158.28 9.89 0.00 0.00 318570.95 40777.96 285834.05
00:26:31.567 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:31.567 Verification LBA range: start 0x0 length 0x400
00:26:31.567 Nvme10n1 : 0.82 175.35 10.96 0.00 0.00 275110.79 8495.41 299815.06
00:26:31.567 ===================================================================================================================
00:26:31.567 Total : 2186.87 136.68 0.00 0.00 255981.40 2451.53 299815.06
00:26:31.826 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1597092
00:26:32.757 09:47:27
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.757 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.757 rmmod nvme_tcp 00:26:32.757 rmmod nvme_fabrics 00:26:33.014 rmmod nvme_keyring 00:26:33.014 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.014 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1597092 ']' 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1597092 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1597092 ']' 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1597092 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597092 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597092' 00:26:33.015 killing process with pid 1597092 00:26:33.015 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1597092 00:26:33.015 09:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1597092 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.584 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:36.120 00:26:36.120 real 0m8.306s 00:26:36.120 user 0m25.847s 00:26:36.120 sys 0m1.625s 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.120 ************************************ 00:26:36.120 END TEST nvmf_shutdown_tc2 00:26:36.120 ************************************ 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:36.120 ************************************ 00:26:36.120 START TEST nvmf_shutdown_tc3 00:26:36.120 ************************************ 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.120 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
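The ID tables being filled here (e810, x722, mlx; the remaining entries and the per-device matching follow just below) whitelist the NIC models the harness knows how to drive. gather_supported_nvmf_pci_devs then maps each matching PCI function to its kernel interface through sysfs, which is where the 'Found net devices under 0000:84:00.x' lines come from. A small sketch of that lookup, assuming the candidate PCI addresses are already known:

    # Map PCI functions to the net interfaces the kernel bound to them (mirrors common.sh@409-@427 in the trace).
    pci_devs=(0000:84:00.0 0000:84:00.1)    # addresses seen in this run
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue       # nothing bound (e.g. port claimed by a userspace driver)
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs prefix, keep the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done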
00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:36.121 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 
(0x8086 - 0x159b)' 00:26:36.121 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:36.121 Found net devices under 0000:84:00.0: cvl_0_0 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:36.121 Found net 
devices under 0000:84:00.1: cvl_0_1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.121 09:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:36.121 00:26:36.121 --- 10.0.0.2 ping statistics --- 00:26:36.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.121 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:26:36.121 00:26:36.121 --- 10.0.0.1 ping statistics --- 00:26:36.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.121 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:36.121 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1598190 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1598190 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1598190 ']' 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.122 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.122 [2024-10-07 09:47:30.651395] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:36.122 [2024-10-07 09:47:30.651491] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.122 [2024-10-07 09:47:30.732696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.122 [2024-10-07 09:47:30.853740] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.122 [2024-10-07 09:47:30.853808] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.122 [2024-10-07 09:47:30.853825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.122 [2024-10-07 09:47:30.853839] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.122 [2024-10-07 09:47:30.853851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
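A note on the setup traced above: after the PCI scan finds the two e810 ports (cvl_0_0 under 0000:84:00.0 and cvl_0_1 under 0000:84:00.1), nvmf_tcp_init in nvmf/common.sh moves one port into a private network namespace to act as the target, leaves the other in the root namespace as the initiator, opens the NVMe/TCP port in the firewall, verifies reachability both ways, and then starts nvmf_tgt inside that namespace. A minimal sketch of the sequence, condensed from the commands in the trace for this run (interface names and addresses are the ones used here; the repeated 'ip netns exec' prefixes on the nvmf_tgt line are collapsed to one for readability):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let the NVMe/TCP listener through
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target app runs in the namespace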
00:26:36.122 [2024-10-07 09:47:30.855885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.122 [2024-10-07 09:47:30.855955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.122 [2024-10-07 09:47:30.856034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:36.122 [2024-10-07 09:47:30.856037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.380 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.380 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:36.380 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:36.380 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.380 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 [2024-10-07 09:47:31.028951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.380 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.381 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.381 Malloc1 00:26:36.381 [2024-10-07 09:47:31.126540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.381 Malloc2 00:26:36.638 Malloc3 00:26:36.638 Malloc4 00:26:36.638 Malloc5 00:26:36.638 Malloc6 00:26:36.638 Malloc7 00:26:36.897 Malloc8 00:26:36.897 Malloc9 00:26:36.897 Malloc10 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1598370 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1598370 /var/tmp/bdevperf.sock 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1598370 ']' 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": 
"Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.897 )") 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:26:36.897 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.897 { 00:26:36.897 "params": { 00:26:36.897 "name": "Nvme$subsystem", 00:26:36.897 "trtype": "$TEST_TRANSPORT", 00:26:36.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.897 "adrfam": "ipv4", 00:26:36.897 "trsvcid": "$NVMF_PORT", 00:26:36.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.897 "hdgst": ${hdgst:-false}, 00:26:36.897 "ddgst": ${ddgst:-false} 00:26:36.897 }, 00:26:36.897 "method": "bdev_nvme_attach_controller" 00:26:36.897 } 00:26:36.897 EOF 00:26:36.898 )") 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.898 { 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme$subsystem", 00:26:36.898 "trtype": "$TEST_TRANSPORT", 00:26:36.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "$NVMF_PORT", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.898 "hdgst": ${hdgst:-false}, 00:26:36.898 "ddgst": ${ddgst:-false} 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 } 00:26:36.898 EOF 00:26:36.898 )") 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.898 { 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme$subsystem", 00:26:36.898 "trtype": "$TEST_TRANSPORT", 00:26:36.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "$NVMF_PORT", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.898 "hdgst": ${hdgst:-false}, 00:26:36.898 "ddgst": ${ddgst:-false} 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 } 00:26:36.898 EOF 00:26:36.898 )") 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.898 { 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme$subsystem", 00:26:36.898 "trtype": "$TEST_TRANSPORT", 00:26:36.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "$NVMF_PORT", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.898 "hdgst": ${hdgst:-false}, 00:26:36.898 "ddgst": ${ddgst:-false} 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 } 00:26:36.898 EOF 00:26:36.898 )") 00:26:36.898 09:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:26:36.898 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme1", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme2", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme3", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme4", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme5", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme6", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme7", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme8", 00:26:36.898 "trtype": "tcp", 
00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme9", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 },{ 00:26:36.898 "params": { 00:26:36.898 "name": "Nvme10", 00:26:36.898 "trtype": "tcp", 00:26:36.898 "traddr": "10.0.0.2", 00:26:36.898 "adrfam": "ipv4", 00:26:36.898 "trsvcid": "4420", 00:26:36.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.898 "hdgst": false, 00:26:36.898 "ddgst": false 00:26:36.898 }, 00:26:36.898 "method": "bdev_nvme_attach_controller" 00:26:36.898 }' 00:26:36.898 [2024-10-07 09:47:31.681312] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:36.898 [2024-10-07 09:47:31.681411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598370 ] 00:26:37.157 [2024-10-07 09:47:31.754187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.157 [2024-10-07 09:47:31.870661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.056 Running I/O for 10 seconds... 
00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:39.315 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.574 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1598190 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1598190 ']' 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1598190 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1598190 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1598190' 00:26:39.855 killing process with pid 1598190 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1598190 00:26:39.855 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1598190 00:26:39.855 [2024-10-07 09:47:34.463651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.855 [2024-10-07 09:47:34.463779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.855 [2024-10-07 09:47:34.463804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.463998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464157] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 
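The wall of nvmf_tcp_qpair_set_recv_state errors that begins above follows directly from killing the target (pid 1598190) while bdevperf still holds its ten connections with I/O in flight; shutdown.sh only issues that kill once waitforio has confirmed the workload ramped up on Nvme1n1 (67 reads on the first poll, 135 on the second, against a threshold of 100). A compact reconstruction of the polling loop as traced at target/shutdown.sh@58-70 (a sketch of the traced logic, not a verbatim copy of the function):

waitforio() {                               # args in the trace: /var/tmp/bdevperf.sock Nvme1n1
  local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then   # enough reads observed: safe to yank the target mid-I/O
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret                               # non-zero means the workload never ramped up
}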
00:26:39.856 [2024-10-07 09:47:34.464448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.464618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3c70 is same with the state(6) to be set 00:26:39.856 [2024-10-07 09:47:34.466211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.856 [2024-10-07 09:47:34.466250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.856 [2024-10-07 09:47:34.466268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.856 [2024-10-07 09:47:34.466281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.856 [2024-10-07 09:47:34.466295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.856 [2024-10-07 09:47:34.466308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.466323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.857 [2024-10-07 09:47:34.466336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.466350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44f70 is same with the state(6) to be set 00:26:39.857 [2024-10-07 09:47:34.467250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467544] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.467985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.467999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.468014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.468028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.468045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.468058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.468073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.468087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.468103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.468121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.857 [2024-10-07 09:47:34.468137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.857 [2024-10-07 09:47:34.468151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t[2024-10-07 09:47:34.468708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:1he state(6) to be set 00:26:39.858 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:1[2024-10-07 09:47:34.468770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 he state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t[2024-10-07 09:47:34.468785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:39.858 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 [2024-10-07 09:47:34.468827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:1[2024-10-07 09:47:34.468840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 he state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-07 09:47:34.468855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.858 he state(6) to be set 00:26:39.858 [2024-10-07 09:47:34.468887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t[2024-10-07 09:47:34.468888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:1he state(6) to be set 00:26:39.858 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.858 [2024-10-07 09:47:34.468911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t[2024-10-07 09:47:34.468912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:39.858 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.468926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be 
set 00:26:39.859 [2024-10-07 09:47:34.468930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.468938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.468945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.468951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.468961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:1[2024-10-07 09:47:34.468963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 he state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.468976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.468978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.468992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.468993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:1[2024-10-07 09:47:34.469093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 he state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with t[2024-10-07 09:47:34.469107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:39.859 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-07 09:47:34.469217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 he state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the 
state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.859 [2024-10-07 09:47:34.469270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.859 [2024-10-07 09:47:34.469282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469363] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x214bc10 was disconnected and freed. reset controller. 
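
Note on the completions above: every WRITE still outstanding on qid:1 is completed with generic status (00/08), i.e. Command Aborted due to SQ Deletion, once the target tears the I/O submission queue down, and bdev_nvme then frees the disconnected qpair and decides to reset the controller. Below is a minimal sketch (not part of this test run) of how an I/O completion callback can recognize that status, assuming the public definitions pulled in by spdk/nvme.h (struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION); the callback name and the requeue flag are illustrative only.

    #include <stdbool.h>
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* I/O completion callback; *cb_arg is a flag the submitter checks to decide
     * whether the WRITE should be resubmitted after the controller reconnects. */
    static void
    write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *requeue = cb_arg;

        *requeue = false;
        if (!spdk_nvme_cpl_is_error(cpl)) {
            return;    /* write completed normally */
        }
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* (00/08): the target deleted the submission queue while the command
             * was outstanding, as in the log above; the write did not complete
             * and is normally retried once the controller has reconnected. */
            *requeue = true;
            return;
        }
        fprintf(stderr, "WRITE failed: sct=0x%x sc=0x%x\n",
                (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
    }
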
00:26:39.859 [2024-10-07 09:47:34.469381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.859 [2024-10-07 09:47:34.469466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.469586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4160 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.471150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:39.860 [2024-10-07 09:47:34.471256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor 00:26:39.860 [2024-10-07 09:47:34.473036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.860 [2024-10-07 09:47:34.473067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396130 with addr=10.0.0.2, port=4420 00:26:39.860 [2024-10-07 09:47:34.473085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396130 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.473155] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.860 [2024-10-07 09:47:34.473703] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor 00:26:39.860 [2024-10-07 09:47:34.474299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:39.860 [2024-10-07 09:47:34.474322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:39.860 [2024-10-07 09:47:34.474339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:39.860 [2024-10-07 09:47:34.474417] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.860 [2024-10-07 09:47:34.474864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.860 [2024-10-07 09:47:34.475490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 
is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.475988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.860 [2024-10-07 09:47:34.476133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476317] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4630 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.476906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.476938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.476956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.476970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.476985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.476998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.477012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.477026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.477039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f41010 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f44f70 (9): Bad file descriptor 00:26:39.861 [2024-10-07 09:47:34.477134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.477155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.477170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.477184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.477199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.477212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.861 [2024-10-07 09:47:34.477227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.861 [2024-10-07 09:47:34.477240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
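
The entries above are the failed reset path: disconnecting the controller also aborts the four outstanding ASYNC EVENT REQUESTs on the admin queue with the same (00/08) status, and the reconnect then fails because connect() to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED) while the target's listener is unavailable, which is why the log reports "controller reinitialization failed" and "Resetting controller failed." The sketch below shows retrying a reset from an application, assuming only spdk_nvme_ctrlr_reset() from spdk/nvme.h; bdev_nvme itself uses the asynchronous path built on spdk_nvme_ctrlr_reconnect_poll_async() seen in the log, and the retry count and back-off here are arbitrary.

    #include <stdio.h>
    #include <unistd.h>

    #include "spdk/nvme.h"

    /* Retry a full controller reset a few times. While the target is unreachable
     * the transport connect() keeps failing (ECONNREFUSED above) and each attempt
     * leaves the controller in a failed state; a later attempt can still succeed
     * once the listener is back. */
    static int
    reset_with_retries(struct spdk_nvme_ctrlr *ctrlr, int max_attempts)
    {
        for (int i = 0; i < max_attempts; i++) {
            int rc = spdk_nvme_ctrlr_reset(ctrlr);
            if (rc == 0) {
                return 0;    /* controller reconnected and reinitialized */
            }
            fprintf(stderr, "reset attempt %d failed: %d\n", i + 1, rc);
            sleep(1);        /* crude back-off for the sketch; a real SPDK
                              * application would not block its reactor */
        }
        return -1;
    }
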
00:26:39.861 [2024-10-07 09:47:34.477254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b9f0 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.861 [2024-10-07 09:47:34.477837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.477996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478341] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.478582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4b20 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479253] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.862 [2024-10-07 09:47:34.479551] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.862 [2024-10-07 09:47:34.479736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 
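
The nvme_tcp_pdu_ch_handle "Unexpected PDU type 0x00" errors interleaved above come from the host reading the 8-byte NVMe/TCP common PDU header off a connection that is being torn down: most likely a zero-filled header, whose type byte 0x00 (ICReq) is never valid on the host side of an established connection, so the PDU is rejected and the qpair is failed. Below is an illustrative layout and check using locally defined types rather than SPDK's own definitions of the same header; the PDU type values come from the NVMe/TCP transport specification.

    #include <stdbool.h>
    #include <stdint.h>

    /* NVMe/TCP common PDU header (first 8 bytes of every PDU). */
    struct nvme_tcp_common_hdr {
        uint8_t  pdu_type;  /* 0x00 ICReq, 0x01 ICResp, 0x04 CapsuleCmd,
                             * 0x05 CapsuleResp, 0x06 H2CData, 0x07 C2HData,
                             * 0x09 R2T, ... */
        uint8_t  flags;
        uint8_t  hlen;      /* header length */
        uint8_t  pdo;       /* PDU data offset */
        uint32_t plen;      /* total PDU length */
    };

    /* On the host side of an established connection only controller-to-host PDU
     * types are legal; anything else (including a zeroed header read from a dying
     * socket, giving type 0x00 as in the log above) is treated as a fatal error. */
    static bool
    host_pdu_type_is_valid(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case 0x01: /* ICResp */
        case 0x03: /* C2HTermReq */
        case 0x05: /* CapsuleResp */
        case 0x07: /* C2HData */
        case 0x09: /* R2T */
            return true;
        default:
            return false;  /* e.g. 0x00 -> "Unexpected PDU type 0x00" */
        }
    }
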
00:26:39.863 [2024-10-07 09:47:34.479831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.479983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is 
same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.480366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc4ff0 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481140] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.863 [2024-10-07 09:47:34.481655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the 
state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.863 [2024-10-07 09:47:34.481847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.481989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482266] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5370 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.482619] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.864 [2024-10-07 09:47:34.482857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:39.864 [2024-10-07 09:47:34.483245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.864 [2024-10-07 09:47:34.483274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396130 with addr=10.0.0.2, port=4420 00:26:39.864 [2024-10-07 09:47:34.483292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2396130 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.483591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor 00:26:39.864 [2024-10-07 09:47:34.483923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:39.864 [2024-10-07 09:47:34.483946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:39.864 [2024-10-07 09:47:34.483962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:39.864 [2024-10-07 09:47:34.484275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.864 [2024-10-07 09:47:34.484368] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.864 [2024-10-07 09:47:34.484530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.864 [2024-10-07 09:47:34.484670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 
09:47:34.484741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.484993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to 
be set 00:26:39.865 [2024-10-07 09:47:34.485041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.485340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5840 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.865 [2024-10-07 09:47:34.486489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc5d30 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.486976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.866 [2024-10-07 09:47:34.486990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set 00:26:39.866 [2024-10-07 09:47:34.486995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.866 [2024-10-07 09:47:34.487002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same 
with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.866 [2024-10-07 09:47:34.487275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.866 [2024-10-07 09:47:34.487288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.866 [2024-10-07 09:47:34.487299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.867 [2024-10-07 09:47:34.487653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.867 [2024-10-07 09:47:34.487661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.867 [2024-10-07 09:47:34.487665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6200 is same with the state(6) to be set
00:26:39.868 [2024-10-07 09:47:34.487782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.487973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.487989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.488002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.488018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.488032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.488047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.868 [2024-10-07 09:47:34.488061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.868 [2024-10-07 09:47:34.488077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 
09:47:34.488376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.868 [2024-10-07 09:47:34.488520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.868 [2024-10-07 09:47:34.488536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.869 [2024-10-07 09:47:34.488731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.488745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23490b0 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.488828] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23490b0 was disconnected and freed. reset controller. 00:26:39.869 [2024-10-07 09:47:34.489032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236faf0 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.489216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23689f0 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.489385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ead380 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.489545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489664] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ff0 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.489711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39990 is same with the state(6) to be set 00:26:39.869 [2024-10-07 09:47:34.489853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f41010 (9): Bad file descriptor 00:26:39.869 [2024-10-07 09:47:34.489923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.869 [2024-10-07 09:47:34.489946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.869 [2024-10-07 09:47:34.489962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.870 [2024-10-07 09:47:34.489975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.489989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.870 [2024-10-07 09:47:34.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.490016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.870 [2024-10-07 09:47:34.490029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.490042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3aa60 is same with the state(6) to be set 00:26:39.870 [2024-10-07 09:47:34.490078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b9f0 (9): Bad file 
descriptor 00:26:39.870 [2024-10-07 09:47:34.491438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:39.870 [2024-10-07 09:47:34.491472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39990 (9): Bad file descriptor 00:26:39.870 [2024-10-07 09:47:34.491551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.491979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.491993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.870 [2024-10-07 09:47:34.492345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.870 [2024-10-07 09:47:34.492359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.492974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.492988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.871 [2024-10-07 09:47:34.493323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.871 [2024-10-07 09:47:34.493339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.493533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.493548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149070 is same with the state(6) to be set 00:26:39.872 [2024-10-07 09:47:34.494959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:39.872 [2024-10-07 09:47:34.495737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.872 [2024-10-07 09:47:34.495769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39990 with addr=10.0.0.2, port=4420 00:26:39.872 [2024-10-07 09:47:34.495787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39990 is same with the state(6) to be set 00:26:39.872 [2024-10-07 09:47:34.495933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.872 [2024-10-07 09:47:34.495967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f44f70 with addr=10.0.0.2, port=4420 00:26:39.872 [2024-10-07 09:47:34.495984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44f70 is same with the state(6) to be set 00:26:39.872 [2024-10-07 09:47:34.496350] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.872 [2024-10-07 09:47:34.496440] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:39.872 [2024-10-07 09:47:34.496469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:39.872 [2024-10-07 09:47:34.496519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39990 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.496542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1f44f70 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.496726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.872 [2024-10-07 09:47:34.496754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396130 with addr=10.0.0.2, port=4420 00:26:39.872 [2024-10-07 09:47:34.496770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396130 is same with the state(6) to be set 00:26:39.872 [2024-10-07 09:47:34.496786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:39.872 [2024-10-07 09:47:34.496799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:39.872 [2024-10-07 09:47:34.496815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:39.872 [2024-10-07 09:47:34.496836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:39.872 [2024-10-07 09:47:34.496851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:39.872 [2024-10-07 09:47:34.496864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:39.872 [2024-10-07 09:47:34.496943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.872 [2024-10-07 09:47:34.496965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.872 [2024-10-07 09:47:34.496986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.497040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:39.872 [2024-10-07 09:47:34.497059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:39.872 [2024-10-07 09:47:34.497073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:39.872 [2024-10-07 09:47:34.497129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:39.872 [2024-10-07 09:47:34.499042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236faf0 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.499080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23689f0 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.499114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ead380 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.499148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0ff0 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.499197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3aa60 (9): Bad file descriptor 00:26:39.872 [2024-10-07 09:47:34.499360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.872 [2024-10-07 09:47:34.499561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.872 [2024-10-07 09:47:34.499578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.499975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.499991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.873 [2024-10-07 09:47:34.500480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.873 [2024-10-07 09:47:34.500495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:39.874 [2024-10-07 09:47:34.500588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 
[2024-10-07 09:47:34.500914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.500990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 09:47:34.501204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.874 [2024-10-07 09:47:34.501218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.874 [2024-10-07 
09:47:34.501234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[condensed: repeated nvme_qpair.c NOTICE pairs on tqpair=0x214a250: READ sqid:1 cid:58-63 nsid:1 lba:23808-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:39.874 [2024-10-07 09:47:34.501420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214a250 is same with the state(6) to be set
[condensed: repeated nvme_qpair.c NOTICE pairs on tqpair=0x2337e50: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:39.877 [2024-10-07 09:47:34.504756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2337e50 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.506032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:39.877 [2024-10-07 09:47:34.506064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:39.877 [2024-10-07 09:47:34.506458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.877 [2024-10-07 09:47:34.506491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3b9f0 with addr=10.0.0.2, port=4420
00:26:39.877 [2024-10-07 09:47:34.506515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b9f0 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.506627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.877 [2024-10-07 09:47:34.506653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f41010 with addr=10.0.0.2, port=4420
00:26:39.877 [2024-10-07 09:47:34.506670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f41010 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.507303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:39.877 [2024-10-07 09:47:34.507331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:39.877 [2024-10-07 09:47:34.507372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b9f0 (9): Bad file descriptor
00:26:39.877 [2024-10-07 09:47:34.507395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f41010 (9): Bad file descriptor
00:26:39.877 [2024-10-07 09:47:34.507664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.877 [2024-10-07 09:47:34.507693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f44f70 with addr=10.0.0.2, port=4420
00:26:39.877 [2024-10-07 09:47:34.507710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44f70 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.507830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.877 [2024-10-07 09:47:34.507857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39990 with addr=10.0.0.2, port=4420
00:26:39.877 [2024-10-07 09:47:34.507874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39990 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.507898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:39.877 [2024-10-07 09:47:34.507914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:39.877 [2024-10-07 09:47:34.507930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:39.877 [2024-10-07 09:47:34.507950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:39.877 [2024-10-07 09:47:34.507966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:39.877 [2024-10-07 09:47:34.507989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:39.877 [2024-10-07 09:47:34.508063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:39.877 [2024-10-07 09:47:34.508099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:39.877 [2024-10-07 09:47:34.508115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:39.877 [2024-10-07 09:47:34.508141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f44f70 (9): Bad file descriptor
00:26:39.877 [2024-10-07 09:47:34.508164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39990 (9): Bad file descriptor
00:26:39.877 [2024-10-07 09:47:34.508389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.877 [2024-10-07 09:47:34.508418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396130 with addr=10.0.0.2, port=4420
00:26:39.877 [2024-10-07 09:47:34.508435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396130 is same with the state(6) to be set
00:26:39.877 [2024-10-07 09:47:34.508450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:39.877 [2024-10-07 09:47:34.508463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:39.877 [2024-10-07 09:47:34.508477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:39.877 [2024-10-07 09:47:34.508498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:39.877 [2024-10-07 09:47:34.508513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:39.877 [2024-10-07 09:47:34.508526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:39.877 [2024-10-07 09:47:34.508575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:39.877 [2024-10-07 09:47:34.508594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:39.877 [2024-10-07 09:47:34.508611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor
00:26:39.877 [2024-10-07 09:47:34.508663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:39.877 [2024-10-07 09:47:34.508681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:39.877 [2024-10-07 09:47:34.508695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:39.877 [2024-10-07 09:47:34.508752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[condensed: repeated nvme_qpair.c NOTICE pairs on tqpair=0x2345180: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:39.879 [2024-10-07 09:47:34.511293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345180 is same with the state(6) to be set
[condensed: repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0-45 nsid:1 lba:16384-22144 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:39.881 [2024-10-07 09:47:34.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 
09:47:34.514331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.514556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.514571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346650 is same with the state(6) to be set 00:26:39.881 [2024-10-07 09:47:34.515809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.515833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.515853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.515869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.881 [2024-10-07 09:47:34.515885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.881 [2024-10-07 09:47:34.515907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.515926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.515940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.515956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.515971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.515987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.882 [2024-10-07 09:47:34.516926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.882 [2024-10-07 09:47:34.516940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.516970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.516987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.517830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.517845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347b80 is same with the state(6) to be set 00:26:39.883 [2024-10-07 09:47:34.519079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.519106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.519128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.883 [2024-10-07 09:47:34.519144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.883 [2024-10-07 09:47:34.519160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.519665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.519680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a5e0 is same with the state(6) to be set 00:26:39.884 [2024-10-07 09:47:34.520748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.520979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.520993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.521011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.521025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.521042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.521056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.521074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.521088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.521105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.884 [2024-10-07 09:47:34.521119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.884 [2024-10-07 09:47:34.521137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 
[2024-10-07 09:47:34.521399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 
09:47:34.521713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.521977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.521992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.522008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.522039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.522054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.522070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.522084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.522101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.522115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.885 [2024-10-07 09:47:34.522132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.885 [2024-10-07 09:47:34.522151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.886 [2024-10-07 09:47:34.522802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.886 [2024-10-07 09:47:34.522816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234bab0 is same with the state(6) to be set 00:26:39.886 [2024-10-07 09:47:34.524057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:39.886 [2024-10-07 09:47:34.524089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:39.886 [2024-10-07 09:47:34.524109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:39.886 [2024-10-07 09:47:34.524127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:39.886 [2024-10-07 09:47:34.524281] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
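The repeated completions above all carry status (00/08): Status Code Type 0x0 (generic) with Status Code 0x08, "Command Aborted due to SQ Deletion", which is what the in-flight READs return once the target tears down its submission queues during shutdown. As a rough illustration only (this sketch is not part of the test or of bdevperf; the callback name and messages are invented), an SPDK I/O completion callback could classify that status like this:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Hypothetical completion callback, not taken from the harness: it decodes
     * the "(00/08)" status printed above, i.e. SCT 0x0 (generic) + SC 0x08
     * (Command Aborted due to SQ Deletion). */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;

            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* I/O completed successfully. */
            }

            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Expected while the target is shutting down: the queue
                     * carrying this READ was deleted before it completed. */
                    printf("I/O aborted by SQ deletion (sct=0x%x, sc=0x%x)\n",
                           cpl->status.sct, cpl->status.sc);
            } else {
                    printf("I/O failed (sct=0x%x, sc=0x%x)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }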
00:26:39.886 task offset: 16384 on job bdev=Nvme10n1 fails
00:26:39.886 
00:26:39.886 Latency(us)
00:26:39.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:39.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme1n1 ended in about 0.86 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme1n1 : 0.86 152.75 9.55 74.06 0.00 278797.48 19709.35 265639.25
00:26:39.886 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme2n1 ended in about 0.87 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme2n1 : 0.87 146.78 9.17 73.39 0.00 280761.08 19612.25 265639.25
00:26:39.886 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme3n1 ended in about 0.88 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme3n1 : 0.88 146.22 9.14 73.11 0.00 275271.68 19418.07 265639.25
00:26:39.886 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme4n1 ended in about 0.88 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme4n1 : 0.88 145.14 9.07 72.57 0.00 270771.90 29515.47 268746.15
00:26:39.886 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme5n1 ended in about 0.89 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme5n1 : 0.89 144.61 9.04 72.30 0.00 265277.69 22233.69 282727.16
00:26:39.886 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme6n1 ended in about 0.89 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme6n1 : 0.89 148.58 9.29 72.04 0.00 254707.92 20583.16 267192.70
00:26:39.886 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.886 Job: Nvme7n1 ended in about 0.86 seconds with error
00:26:39.886 Verification LBA range: start 0x0 length 0x400
00:26:39.886 Nvme7n1 : 0.86 148.70 9.29 74.35 0.00 244704.14 33787.45 248551.35
00:26:39.887 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.887 Job: Nvme8n1 ended in about 0.89 seconds with error
00:26:39.887 Verification LBA range: start 0x0 length 0x400
00:26:39.887 Nvme8n1 : 0.89 215.72 13.48 20.22 0.00 217490.52 18350.08 268746.15
00:26:39.887 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.887 Job: Nvme9n1 ended in about 0.89 seconds with error
00:26:39.887 Verification LBA range: start 0x0 length 0x400
00:26:39.887 Nvme9n1 : 0.89 143.27 8.95 71.64 0.00 243063.97 21165.70 267192.70
00:26:39.887 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:39.887 Job: Nvme10n1 ended in about 0.84 seconds with error
00:26:39.887 Verification LBA range: start 0x0 length 0x400
00:26:39.887 Nvme10n1 : 0.84 152.25 9.52 76.13 0.00 219273.54 4247.70 293601.28
00:26:39.887 ===================================================================================================================
00:26:39.887 Total : 1544.03 96.50 679.81 0.00 254713.19 4247.70 293601.28
00:26:39.887 [2024-10-07 09:47:34.556624] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:39.887 [2024-10-07 09:47:34.556718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:26:39.887 [2024-10-07 09:47:34.557036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.557074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3aa60 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.557095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3aa60 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.557214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.557242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236faf0 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.557259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236faf0 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.557391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.557419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23689f0 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.557435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23689f0 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.557576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.557602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ead380 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.557619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ead380 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.559094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:39.887 [2024-10-07 09:47:34.559124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:39.887 [2024-10-07 09:47:34.559149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:39.887 [2024-10-07 09:47:34.559167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:39.887 [2024-10-07 09:47:34.559184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:39.887 [2024-10-07 09:47:34.559395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.559425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a0ff0 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.559442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ff0 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.559467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3aa60 (9): Bad file descriptor 00:26:39.887 [2024-10-07 09:47:34.559490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236faf0 (9): Bad file descriptor 00:26:39.887 [2024-10-07 09:47:34.559508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23689f0 (9): Bad file descriptor 00:26:39.887 [2024-10-07 09:47:34.559526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1ead380 (9): Bad file descriptor 00:26:39.887 [2024-10-07 09:47:34.559579] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.887 [2024-10-07 09:47:34.559602] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.887 [2024-10-07 09:47:34.559636] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.887 [2024-10-07 09:47:34.559655] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.887 [2024-10-07 09:47:34.559971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.560002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f41010 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.560019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f41010 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.560132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.560159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3b9f0 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.560175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b9f0 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.560300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.560327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39990 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.560349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39990 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.560558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.560594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f44f70 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.560611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44f70 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.560755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.887 [2024-10-07 09:47:34.560781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396130 with addr=10.0.0.2, port=4420 00:26:39.887 [2024-10-07 09:47:34.560797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396130 is same with the state(6) to be set 00:26:39.887 [2024-10-07 09:47:34.560816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0ff0 (9): Bad file descriptor 00:26:39.887 [2024-10-07 09:47:34.560840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:39.887 [2024-10-07 09:47:34.560854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:39.887 [2024-10-07 09:47:34.560870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:39.887 [2024-10-07 09:47:34.560898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:39.887 [2024-10-07 09:47:34.560915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:39.887 [2024-10-07 09:47:34.560928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:39.887 [2024-10-07 09:47:34.560948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:39.887 [2024-10-07 09:47:34.560975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:39.887 [2024-10-07 09:47:34.560988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:39.887 [2024-10-07 09:47:34.561005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:39.887 [2024-10-07 09:47:34.561019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f41010 (9): Bad file descriptor 00:26:39.888 [2024-10-07 09:47:34.561207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b9f0 (9): Bad file descriptor 00:26:39.888 [2024-10-07 09:47:34.561225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39990 (9): Bad file descriptor 00:26:39.888 [2024-10-07 09:47:34.561243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f44f70 (9): Bad file descriptor 00:26:39.888 [2024-10-07 09:47:34.561260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396130 (9): Bad file descriptor 00:26:39.888 [2024-10-07 09:47:34.561276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:39.888 [2024-10-07 09:47:34.561366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:39.888 [2024-10-07 09:47:34.561551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:39.888 [2024-10-07 09:47:34.561564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:39.888 [2024-10-07 09:47:34.561601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.888 [2024-10-07 09:47:34.561655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
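The posix_sock_create errors in the block above report errno = 111, which on Linux is ECONNREFUSED: with the target application already stopped, every reconnect attempt to 10.0.0.2 port 4420 is refused at the TCP level, so the controller resets cannot succeed. A minimal stand-alone sketch (hypothetical, not taken from the harness) of how a plain connect() surfaces that same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper: one TCP connect attempt to the (assumed) target
     * address used in this test.  With no listener on the port, connect()
     * fails and errno is 111 (ECONNREFUSED), the value logged by posix.c. */
    int
    main(void)
    {
            struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }
            inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                    printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            }
            close(fd);
            return 0;
    }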
00:26:40.456 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1598370 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1598370 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1598370 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:41.393 rmmod nvme_tcp 00:26:41.393 
rmmod nvme_fabrics 00:26:41.393 rmmod nvme_keyring 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1598190 ']' 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1598190 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1598190 ']' 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1598190 00:26:41.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1598190) - No such process 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1598190 is not found' 00:26:41.393 Process with pid 1598190 is not found 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.393 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:43.927 00:26:43.927 real 0m7.872s 00:26:43.927 user 0m19.836s 00:26:43.927 sys 0m1.580s 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.927 ************************************ 00:26:43.927 END TEST nvmf_shutdown_tc3 00:26:43.927 ************************************ 00:26:43.927 09:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.927 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.927 ************************************ 00:26:43.927 START TEST nvmf_shutdown_tc4 00:26:43.928 ************************************ 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:43.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:43.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.928 09:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:43.928 Found net devices under 0000:84:00.0: cvl_0_0 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:43.928 Found net devices under 0000:84:00.1: cvl_0_1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.928 09:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.928 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:26:43.928 00:26:43.929 --- 10.0.0.2 ping statistics --- 00:26:43.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.929 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:43.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:26:43.929 00:26:43.929 --- 10.0.0.1 ping statistics --- 00:26:43.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.929 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1599269 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1599269 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1599269 ']' 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.929 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 [2024-10-07 09:47:38.568996] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:26:43.929 [2024-10-07 09:47:38.569071] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.929 [2024-10-07 09:47:38.661364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.188 [2024-10-07 09:47:38.837454] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.188 [2024-10-07 09:47:38.837569] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.188 [2024-10-07 09:47:38.837606] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.188 [2024-10-07 09:47:38.837650] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.188 [2024-10-07 09:47:38.837676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.188 [2024-10-07 09:47:38.840789] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.188 [2024-10-07 09:47:38.840874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.188 [2024-10-07 09:47:38.840949] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:44.188 [2024-10-07 09:47:38.840953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.121 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:45.122 [2024-10-07 09:47:39.917518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:45.122 09:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.122 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.380 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:45.380 Malloc1 
00:26:45.380 [2024-10-07 09:47:40.004679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.380 Malloc2 00:26:45.380 Malloc3 00:26:45.380 Malloc4 00:26:45.380 Malloc5 00:26:45.638 Malloc6 00:26:45.638 Malloc7 00:26:45.638 Malloc8 00:26:45.638 Malloc9 00:26:45.638 Malloc10 00:26:45.638 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.638 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:45.638 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.638 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:45.896 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1599577 00:26:45.896 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:45.896 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:45.896 [2024-10-07 09:47:40.515811] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1599269 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1599269 ']' 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1599269 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599269 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599269' 00:26:51.169 killing process with pid 1599269 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1599269 00:26:51.169 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1599269 00:26:51.169 [2024-10-07 09:47:45.530830] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123eba0 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.530942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123eba0 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.530960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123eba0 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.530982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123eba0 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.531476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f070 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123f540 is same with the 
state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.169 [2024-10-07 09:47:45.532396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 [2024-10-07 09:47:45.532529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e6d0 is same with the state(6) to be set 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with 
error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 [2024-10-07 09:47:45.537398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, 
sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 [2024-10-07 09:47:45.538603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.170 starting I/O failed: -6 00:26:51.170 starting I/O failed: -6 00:26:51.170 starting I/O failed: -6 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 starting I/O failed: -6 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.170 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, 
sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 [2024-10-07 09:47:45.540105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 
00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 [2024-10-07 09:47:45.542015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.171 NVMe io qpair process completion error 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write 
completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 starting I/O failed: -6 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.171 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 [2024-10-07 09:47:45.543300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with 
error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 [2024-10-07 09:47:45.544443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error 
(sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 [2024-10-07 09:47:45.545765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error 
(sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.172 Write completed with error (sct=0, sc=8) 00:26:51.172 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error 
(sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 [2024-10-07 09:47:45.547941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.173 NVMe io qpair process completion error 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 
[2024-10-07 09:47:45.549296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.173 starting I/O failed: -6 00:26:51.173 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 [2024-10-07 09:47:45.550382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 
00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, 
sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 [2024-10-07 09:47:45.551771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 00:26:51.174 starting I/O failed: -6 00:26:51.174 Write completed with error (sct=0, sc=8) 
00:26:51.174 starting I/O failed: -6
00:26:51.174 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries for the remaining queued I/O omitted before and after each of the lines below ...]
00:26:51.175 [2024-10-07 09:47:45.554398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.175 NVMe io qpair process completion error
00:26:51.175 [2024-10-07 09:47:45.555822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.175 [2024-10-07 09:47:45.556924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.176 [2024-10-07 09:47:45.558303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:51.176 [2024-10-07 09:47:45.560548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.176 NVMe io qpair process completion error
00:26:51.177 [2024-10-07 09:47:45.562056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.177 [2024-10-07 09:47:45.563148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.177 [2024-10-07 09:47:45.564636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:51.178 [2024-10-07 09:47:45.569713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.178 NVMe io qpair process completion error
00:26:51.178 [2024-10-07 09:47:45.571083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.178 [2024-10-07 09:47:45.572219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.179 [2024-10-07 09:47:45.573666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:51.179 [2024-10-07 09:47:45.577733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.179 NVMe io qpair process completion error
00:26:51.180 [2024-10-07 09:47:45.579287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.180 [2024-10-07 09:47:45.580363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.180 [2024-10-07 09:47:45.581799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:51.181 [2024-10-07 09:47:45.584634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.181 NVMe io qpair process completion error
[2024-10-07 09:47:45.587030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.181 Write completed with error (sct=0, sc=8) 00:26:51.181 starting I/O failed: -6 00:26:51.181 Write completed with error (sct=0, sc=8) 00:26:51.181 Write completed with error (sct=0, sc=8) 00:26:51.181 starting I/O failed: -6 00:26:51.181 Write completed with error (sct=0, sc=8) 00:26:51.181 starting I/O failed: -6 00:26:51.181 Write completed with error (sct=0, sc=8) 00:26:51.181 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with 
error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 [2024-10-07 09:47:45.588422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with 
error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 [2024-10-07 09:47:45.590953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.182 NVMe io qpair process completion error 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write 
completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 starting I/O failed: -6 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.182 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 [2024-10-07 09:47:45.592360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with 
error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 [2024-10-07 09:47:45.593606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error 
(sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 [2024-10-07 09:47:45.594971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 
00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.183 starting I/O failed: -6 00:26:51.183 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 
00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 [2024-10-07 09:47:45.599382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.184 NVMe io qpair process completion error 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 [2024-10-07 09:47:45.600714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, 
sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 Write completed with error (sct=0, sc=8) 00:26:51.184 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 [2024-10-07 09:47:45.601921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 
00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 Write completed with error (sct=0, 
sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 [2024-10-07 09:47:45.603319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 00:26:51.185 starting I/O failed: -6 00:26:51.185 Write completed with error (sct=0, sc=8) 
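Nearly all of the raw console output in this stretch is those two per-I/O messages; the lines that actually identify the failure are the bracketed nvme_qpair.c entries, one per torn-down qpair. When triaging a saved copy of this console (the filename console.log below is purely hypothetical), the noise can be summarized with standard tools:

# Count the per-I/O messages instead of reading them.
grep -c 'Write completed with error (sct=0, sc=8)' console.log
grep -c 'starting I/O failed: -6' console.log

# Keep only the distinct transport-level errors, with a count per qpair teardown.
grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' console.log | sort | uniq -c

This collapses thousands of repeated lines into one count per message plus one line per qpair that dropped its TCP connection.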
00:26:51.185 [2024-10-07 09:47:45.608267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:51.185 NVMe io qpair process completion error
00:26:51.185 Initializing NVMe Controllers
00:26:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:26:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:26:51.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:26:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:51.186 For each attached controller: Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:51.186 Initialization complete. Launching workers.
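Each controller reported "Controller IO queue size 128, less than required": the queue depth requested by the perf initiator exceeds the 128-entry I/O queues the target offers, so the excess requests wait inside the NVMe driver, exactly as the message warns. The shutdown test drives the target this way on purpose, but if the queuing itself were the concern, the advice amounts to re-running perf with a queue depth at or below 128 (or smaller I/Os). A minimal illustrative invocation, not taken from the harness; the flag values are assumptions, and only the binary path and the transport address come from this log:

# Hypothetical stand-alone re-run against one of the subsystems above, with a queue
# depth that fits the 128-entry I/O queue: -q is the per-qpair queue depth, -o the
# I/O size in bytes, -w the workload, -t the runtime in seconds, -r the transport ID.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 64 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'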
00:26:51.186 ========================================================
00:26:51.186 Latency(us)
00:26:51.186 Device Information                                                      :     IOPS   MiB/s    Average        min        max
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1707.38   73.36   74988.40     957.31  133250.61
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1680.26   72.20   75130.45    1082.36  133552.80
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1689.57   72.60   74737.05     994.96  129197.91
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1669.70   71.74   75655.36    1006.20  129059.84
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1690.82   72.65   74745.14    1105.04  130094.68
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1677.98   72.10   75359.31     973.33  137819.10
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1686.05   72.45   75061.21    1210.18  129694.85
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1672.60   71.87   75724.47     906.51  148391.11
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1690.82   72.65   74942.60    1391.91  129422.12
00:26:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1680.05   72.19   75455.95    1158.65  130062.87
00:26:51.186 ========================================================
00:26:51.186 Total                                                                   : 16845.22  723.82   75178.41     906.51  148391.11
00:26:51.186
00:26:51.186 [2024-10-07 09:47:45.612496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a6a0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10657f0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10659d0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a370 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063ab0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1069d10 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063780 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.612963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063de0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.613020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065bb0 is same with the state(6) to be set
00:26:51.186 [2024-10-07 09:47:45.613077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a040 is same with the state(6) to be set
00:26:51.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:51.445 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:52.381 09:47:47
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1599577 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1599577 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1599577 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.381 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.381 rmmod nvme_tcp 00:26:52.641 rmmod nvme_fabrics 00:26:52.641 rmmod nvme_keyring 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1599269 ']' 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1599269 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1599269 ']' 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1599269 00:26:52.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1599269) - No such process 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1599269 is not found' 00:26:52.641 Process with pid 1599269 is not found 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.641 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.543 00:26:54.543 real 0m11.000s 00:26:54.543 user 0m28.782s 00:26:54.543 sys 0m6.244s 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:54.543 ************************************ 00:26:54.543 END TEST nvmf_shutdown_tc4 00:26:54.543 ************************************ 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:54.543 00:26:54.543 real 0m40.275s 00:26:54.543 user 1m50.729s 00:26:54.543 sys 0m13.329s 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.543 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:26:54.543 ************************************ 00:26:54.543 END TEST nvmf_shutdown 00:26:54.543 ************************************ 00:26:54.802 09:47:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:54.802 00:26:54.802 real 13m34.171s 00:26:54.802 user 32m25.484s 00:26:54.802 sys 3m10.851s 00:26:54.802 09:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.802 09:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:54.802 ************************************ 00:26:54.802 END TEST nvmf_target_extra 00:26:54.802 ************************************ 00:26:54.802 09:47:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:54.802 09:47:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:54.802 09:47:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:54.802 09:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.802 ************************************ 00:26:54.802 START TEST nvmf_host 00:26:54.802 ************************************ 00:26:54.802 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:54.802 * Looking for test storage... 00:26:54.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:54.802 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:54.802 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:54.802 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:55.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.062 --rc genhtml_branch_coverage=1 00:26:55.062 --rc genhtml_function_coverage=1 00:26:55.062 --rc genhtml_legend=1 00:26:55.062 --rc geninfo_all_blocks=1 00:26:55.062 --rc geninfo_unexecuted_blocks=1 00:26:55.062 00:26:55.062 ' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:55.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.062 --rc genhtml_branch_coverage=1 00:26:55.062 --rc genhtml_function_coverage=1 00:26:55.062 --rc genhtml_legend=1 00:26:55.062 --rc geninfo_all_blocks=1 00:26:55.062 --rc geninfo_unexecuted_blocks=1 00:26:55.062 00:26:55.062 ' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:55.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.062 --rc genhtml_branch_coverage=1 00:26:55.062 --rc genhtml_function_coverage=1 00:26:55.062 --rc genhtml_legend=1 00:26:55.062 --rc geninfo_all_blocks=1 00:26:55.062 --rc geninfo_unexecuted_blocks=1 00:26:55.062 00:26:55.062 ' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:55.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.062 --rc genhtml_branch_coverage=1 00:26:55.062 --rc genhtml_function_coverage=1 00:26:55.062 --rc genhtml_legend=1 00:26:55.062 --rc geninfo_all_blocks=1 00:26:55.062 --rc geninfo_unexecuted_blocks=1 00:26:55.062 00:26:55.062 ' 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.062 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.063 ************************************ 00:26:55.063 START TEST nvmf_multicontroller 00:26:55.063 ************************************ 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:55.063 * Looking for test storage... 
00:26:55.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:26:55.063 09:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:55.322 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.323 --rc genhtml_branch_coverage=1 00:26:55.323 --rc genhtml_function_coverage=1 00:26:55.323 --rc genhtml_legend=1 00:26:55.323 --rc geninfo_all_blocks=1 00:26:55.323 --rc geninfo_unexecuted_blocks=1 00:26:55.323 00:26:55.323 ' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.323 --rc genhtml_branch_coverage=1 00:26:55.323 --rc genhtml_function_coverage=1 00:26:55.323 --rc genhtml_legend=1 00:26:55.323 --rc geninfo_all_blocks=1 00:26:55.323 --rc geninfo_unexecuted_blocks=1 00:26:55.323 00:26:55.323 ' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.323 --rc genhtml_branch_coverage=1 00:26:55.323 --rc genhtml_function_coverage=1 00:26:55.323 --rc genhtml_legend=1 00:26:55.323 --rc geninfo_all_blocks=1 00:26:55.323 --rc geninfo_unexecuted_blocks=1 00:26:55.323 00:26:55.323 ' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.323 --rc genhtml_branch_coverage=1 00:26:55.323 --rc genhtml_function_coverage=1 00:26:55.323 --rc genhtml_legend=1 00:26:55.323 --rc geninfo_all_blocks=1 00:26:55.323 --rc geninfo_unexecuted_blocks=1 00:26:55.323 00:26:55.323 ' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:55.323 09:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.323 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.324 09:47:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.324 09:47:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:57.857 
09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:57.857 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:57.857 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.857 09:47:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:57.857 Found net devices under 0000:84:00.0: cvl_0_0 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:57.857 Found net devices under 0000:84:00.1: cvl_0_1 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:26:57.857 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
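Note: the nvmf_tcp_init steps in the trace that follows wire up the point-to-point test topology. As a condensed sketch (interface, namespace, and address names are the ones used in this run; the iptables rule is shown without the SPDK_NVMF comment tag the trace adds):

  # Target-side NIC port is moved into its own network namespace; the peer port
  # stays in the root namespace as the initiator interface.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # target reachable from root ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # initiator reachable from target ns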
00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:57.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:26:57.858 00:26:57.858 --- 10.0.0.2 ping statistics --- 00:26:57.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.858 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:26:57.858 00:26:57.858 --- 10.0.0.1 ping statistics --- 00:26:57.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.858 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:57.858 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1602394 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1602394 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1602394 ']' 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.118 09:47:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.118 [2024-10-07 09:47:52.729361] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:26:58.118 [2024-10-07 09:47:52.729443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.118 [2024-10-07 09:47:52.828054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.376 [2024-10-07 09:47:53.017296] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.376 [2024-10-07 09:47:53.017410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.376 [2024-10-07 09:47:53.017447] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.376 [2024-10-07 09:47:53.017488] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.376 [2024-10-07 09:47:53.017497] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.376 [2024-10-07 09:47:53.019023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.376 [2024-10-07 09:47:53.019107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.376 [2024-10-07 09:47:53.019111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.376 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.377 [2024-10-07 09:47:53.170541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.377 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.377 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 Malloc0 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 [2024-10-07 09:47:53.236900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 [2024-10-07 09:47:53.244767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 Malloc1 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1602542 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1602542 /var/tmp/bdevperf.sock 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1602542 ']' 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:58.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
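Note: taken together, the rpc_cmd calls traced above configure the target with two subsystems, each backed by one Malloc namespace and listening on both test ports, before bdevperf is started in wait-for-RPC mode on its own socket. A sketch of the same setup as direct rpc.py invocations (assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock; paths abbreviated):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf, acting as the NVMe-oF host side of the test, waits on its own RPC socket:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f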
00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.681 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:58.966 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.966 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:58.966 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:58.966 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.966 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.224 NVMe0n1 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.224 1 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.224 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.224 request: 00:26:59.224 { 00:26:59.224 "name": "NVMe0", 00:26:59.224 "trtype": "tcp", 00:26:59.224 "traddr": "10.0.0.2", 00:26:59.224 "adrfam": "ipv4", 00:26:59.224 "trsvcid": "4420", 00:26:59.224 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:26:59.224 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:59.224 "hostaddr": "10.0.0.1", 00:26:59.225 "prchk_reftag": false, 00:26:59.225 "prchk_guard": false, 00:26:59.225 "hdgst": false, 00:26:59.225 "ddgst": false, 00:26:59.225 "allow_unrecognized_csi": false, 00:26:59.225 "method": "bdev_nvme_attach_controller", 00:26:59.225 "req_id": 1 00:26:59.225 } 00:26:59.225 Got JSON-RPC error response 00:26:59.225 response: 00:26:59.225 { 00:26:59.225 "code": -114, 00:26:59.225 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:59.225 } 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.225 request: 00:26:59.225 { 00:26:59.225 "name": "NVMe0", 00:26:59.225 "trtype": "tcp", 00:26:59.225 "traddr": "10.0.0.2", 00:26:59.225 "adrfam": "ipv4", 00:26:59.225 "trsvcid": "4420", 00:26:59.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.225 "hostaddr": "10.0.0.1", 00:26:59.225 "prchk_reftag": false, 00:26:59.225 "prchk_guard": false, 00:26:59.225 "hdgst": false, 00:26:59.225 "ddgst": false, 00:26:59.225 "allow_unrecognized_csi": false, 00:26:59.225 "method": "bdev_nvme_attach_controller", 00:26:59.225 "req_id": 1 00:26:59.225 } 00:26:59.225 Got JSON-RPC error response 00:26:59.225 response: 00:26:59.225 { 00:26:59.225 "code": -114, 00:26:59.225 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:59.225 } 00:26:59.225 09:47:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.225 request: 00:26:59.225 { 00:26:59.225 "name": "NVMe0", 00:26:59.225 "trtype": "tcp", 00:26:59.225 "traddr": "10.0.0.2", 00:26:59.225 "adrfam": "ipv4", 00:26:59.225 "trsvcid": "4420", 00:26:59.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.225 "hostaddr": "10.0.0.1", 00:26:59.225 "prchk_reftag": false, 00:26:59.225 "prchk_guard": false, 00:26:59.225 "hdgst": false, 00:26:59.225 "ddgst": false, 00:26:59.225 "multipath": "disable", 00:26:59.225 "allow_unrecognized_csi": false, 00:26:59.225 "method": "bdev_nvme_attach_controller", 00:26:59.225 "req_id": 1 00:26:59.225 } 00:26:59.225 Got JSON-RPC error response 00:26:59.225 response: 00:26:59.225 { 00:26:59.225 "code": -114, 00:26:59.225 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:59.225 } 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.225 09:47:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.225 request: 00:26:59.225 { 00:26:59.225 "name": "NVMe0", 00:26:59.225 "trtype": "tcp", 00:26:59.225 "traddr": "10.0.0.2", 00:26:59.225 "adrfam": "ipv4", 00:26:59.225 "trsvcid": "4420", 00:26:59.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.225 "hostaddr": "10.0.0.1", 00:26:59.225 "prchk_reftag": false, 00:26:59.225 "prchk_guard": false, 00:26:59.225 "hdgst": false, 00:26:59.225 "ddgst": false, 00:26:59.225 "multipath": "failover", 00:26:59.225 "allow_unrecognized_csi": false, 00:26:59.225 "method": "bdev_nvme_attach_controller", 00:26:59.225 "req_id": 1 00:26:59.225 } 00:26:59.225 Got JSON-RPC error response 00:26:59.225 response: 00:26:59.225 { 00:26:59.225 "code": -114, 00:26:59.225 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:59.225 } 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.225 09:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.484 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
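The multicontroller exchange above reduces to a handful of bdev_nvme_attach_controller calls against the bdevperf RPC socket: one path is accepted, every conflicting re-attach is rejected with JSON-RPC error -114, and a genuinely different network path (port 4421) to the same subsystem is accepted again. A condensed sketch of that sequence, assuming the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper; the socket path /var/tmp/bdevperf.sock and all flags are taken from the run above.

# Attach the first path; this creates controller NVMe0 backing bdev NVMe0n1.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Re-attaching under the same name with a different host NQN (-q), a different
# subsystem NQN (cnode2), "-x disable", or "-x failover" over the identical
# 10.0.0.2:4420 path is rejected with error -114, exactly as logged above.

# A second listener of the same subsystem (port 4421) is accepted as an
# additional path for the existing NVMe0 controller.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

After that, the script detaches the 4421 path again, re-attaches it as a separate controller NVMe1, checks that bdev_nvme_get_controllers reports two controllers, and drives the write workload through bdevperf.py perform_tests, which is the source of the 18597.55 IOPS summary further down.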
00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.484 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:59.484 09:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:00.858 { 00:27:00.858 "results": [ 00:27:00.858 { 00:27:00.858 "job": "NVMe0n1", 00:27:00.858 "core_mask": "0x1", 00:27:00.858 "workload": "write", 00:27:00.858 "status": "finished", 00:27:00.858 "queue_depth": 128, 00:27:00.858 "io_size": 4096, 00:27:00.858 "runtime": 1.004272, 00:27:00.858 "iops": 18597.55126101295, 00:27:00.858 "mibps": 72.64668461333184, 00:27:00.858 "io_failed": 0, 00:27:00.858 "io_timeout": 0, 00:27:00.858 "avg_latency_us": 6872.503814436056, 00:27:00.858 "min_latency_us": 4320.521481481482, 00:27:00.858 "max_latency_us": 16311.182222222222 00:27:00.858 } 00:27:00.858 ], 00:27:00.858 "core_count": 1 00:27:00.858 } 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1602542 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1602542 ']' 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1602542 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602542 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602542' 00:27:00.858 killing process with pid 1602542 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1602542 00:27:00.858 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1602542 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:01.117 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:01.117 [2024-10-07 09:47:53.365032] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:27:01.117 [2024-10-07 09:47:53.365149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602542 ] 00:27:01.117 [2024-10-07 09:47:53.436938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.117 [2024-10-07 09:47:53.550936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.117 [2024-10-07 09:47:54.179406] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name ead4af47-6238-43fc-be5b-fca383d8f511 already exists 00:27:01.117 [2024-10-07 09:47:54.179447] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:ead4af47-6238-43fc-be5b-fca383d8f511 alias for bdev NVMe1n1 00:27:01.117 [2024-10-07 09:47:54.179478] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:01.117 Running I/O for 1 seconds... 00:27:01.117 18549.00 IOPS, 72.46 MiB/s 00:27:01.117 Latency(us) 00:27:01.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.117 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:01.117 NVMe0n1 : 1.00 18597.55 72.65 0.00 0.00 6872.50 4320.52 16311.18 00:27:01.117 =================================================================================================================== 00:27:01.117 Total : 18597.55 72.65 0.00 0.00 6872.50 4320.52 16311.18 00:27:01.117 Received shutdown signal, test time was about 1.000000 seconds 00:27:01.117 00:27:01.117 Latency(us) 00:27:01.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.117 =================================================================================================================== 00:27:01.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.117 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.117 rmmod nvme_tcp 00:27:01.117 rmmod nvme_fabrics 00:27:01.117 rmmod nvme_keyring 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1602394 ']' 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 1602394 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1602394 ']' 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1602394 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602394 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602394' 00:27:01.117 killing process with pid 1602394 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1602394 00:27:01.117 09:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1602394 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:01.376 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:27:01.635 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.635 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.635 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.635 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.635 09:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.537 00:27:03.537 real 0m8.485s 00:27:03.537 user 0m12.807s 00:27:03.537 sys 0m2.907s 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:03.537 ************************************ 00:27:03.537 END TEST nvmf_multicontroller 00:27:03.537 ************************************ 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.537 ************************************ 00:27:03.537 START TEST nvmf_aer 00:27:03.537 ************************************ 00:27:03.537 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:03.794 * Looking for test storage... 00:27:03.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:03.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.795 --rc genhtml_branch_coverage=1 00:27:03.795 --rc genhtml_function_coverage=1 00:27:03.795 --rc genhtml_legend=1 00:27:03.795 --rc geninfo_all_blocks=1 00:27:03.795 --rc geninfo_unexecuted_blocks=1 00:27:03.795 00:27:03.795 ' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:03.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.795 --rc genhtml_branch_coverage=1 00:27:03.795 --rc genhtml_function_coverage=1 00:27:03.795 --rc genhtml_legend=1 00:27:03.795 --rc geninfo_all_blocks=1 00:27:03.795 --rc geninfo_unexecuted_blocks=1 00:27:03.795 00:27:03.795 ' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:03.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.795 --rc genhtml_branch_coverage=1 00:27:03.795 --rc genhtml_function_coverage=1 00:27:03.795 --rc genhtml_legend=1 00:27:03.795 --rc geninfo_all_blocks=1 00:27:03.795 --rc geninfo_unexecuted_blocks=1 00:27:03.795 00:27:03.795 ' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:03.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.795 --rc genhtml_branch_coverage=1 00:27:03.795 --rc genhtml_function_coverage=1 00:27:03.795 --rc genhtml_legend=1 00:27:03.795 --rc geninfo_all_blocks=1 00:27:03.795 --rc geninfo_unexecuted_blocks=1 00:27:03.795 00:27:03.795 ' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.795 09:47:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.081 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:07.082 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:07.082 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:07.082 Found net devices under 0000:84:00.0: cvl_0_0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.082 09:48:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:07.082 Found net devices under 0000:84:00.1: cvl_0_1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.082 
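The nvmf_tcp_init steps above split one physical host into a target side and an initiator side: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1. A minimal sketch of that setup, run as root, using only the interface and namespace names discovered above:

# Clear any stale addressing, then isolate the target port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2 inside it.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow in the log are the sanity check that both directions work before nvmf_tgt is started inside the namespace.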
09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:27:07.082 00:27:07.082 --- 10.0.0.2 ping statistics --- 00:27:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.082 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:27:07.082 00:27:07.082 --- 10.0.0.1 ping statistics --- 00:27:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.082 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1604882 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1604882 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1604882 ']' 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.082 [2024-10-07 09:48:01.426851] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:27:07.082 [2024-10-07 09:48:01.427024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.082 [2024-10-07 09:48:01.531293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.082 [2024-10-07 09:48:01.654028] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.082 [2024-10-07 09:48:01.654096] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.082 [2024-10-07 09:48:01.654112] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.082 [2024-10-07 09:48:01.654126] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.082 [2024-10-07 09:48:01.654138] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.082 [2024-10-07 09:48:01.656003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.082 [2024-10-07 09:48:01.656059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.082 [2024-10-07 09:48:01.656172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.082 [2024-10-07 09:48:01.656175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.082 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 [2024-10-07 09:48:01.835647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 Malloc0 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.083 [2024-10-07 09:48:01.886858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.083 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.341 [ 00:27:07.341 { 00:27:07.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:07.341 "subtype": "Discovery", 00:27:07.341 "listen_addresses": [], 00:27:07.341 "allow_any_host": true, 00:27:07.341 "hosts": [] 00:27:07.341 }, 00:27:07.341 { 00:27:07.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.341 "subtype": "NVMe", 00:27:07.341 "listen_addresses": [ 00:27:07.341 { 00:27:07.341 "trtype": "TCP", 00:27:07.341 "adrfam": "IPv4", 00:27:07.341 "traddr": "10.0.0.2", 00:27:07.341 "trsvcid": "4420" 00:27:07.341 } 00:27:07.341 ], 00:27:07.341 "allow_any_host": true, 00:27:07.341 "hosts": [], 00:27:07.341 "serial_number": "SPDK00000000000001", 00:27:07.341 "model_number": "SPDK bdev Controller", 00:27:07.341 "max_namespaces": 2, 00:27:07.341 "min_cntlid": 1, 00:27:07.341 "max_cntlid": 65519, 00:27:07.341 "namespaces": [ 00:27:07.341 { 00:27:07.341 "nsid": 1, 00:27:07.341 "bdev_name": "Malloc0", 00:27:07.341 "name": "Malloc0", 00:27:07.341 "nguid": "D0F1D5E59CA04B96B6CF81D580642EE8", 00:27:07.341 "uuid": "d0f1d5e5-9ca0-4b96-b6cf-81d580642ee8" 00:27:07.341 } 00:27:07.341 ] 00:27:07.341 } 00:27:07.341 ] 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1605015 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:07.341 09:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:07.341 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.599 Malloc1 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.599 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.599 Asynchronous Event Request test 00:27:07.599 Attaching to 10.0.0.2 00:27:07.599 Attached to 10.0.0.2 00:27:07.599 Registering asynchronous event callbacks... 00:27:07.599 Starting namespace attribute notice tests for all controllers... 00:27:07.599 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:07.599 aer_cb - Changed Namespace 00:27:07.599 Cleaning up... 
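The AER exchange above is driven entirely from the target side: the aer tool from test/nvme/aer stays connected to nqn.2016-06.io.spdk:cnode1 and waits, and hot-adding a second namespace is what raises the Namespace Attribute Changed notice (log page 4) printed above. A condensed sketch of the target-side calls mirrored from host/aer.sh, assuming the stock scripts/rpc.py client and the target's default RPC socket in place of the test's rpc_cmd wrapper:

# One-time target setup: transport, a malloc bdev, and a 2-namespace subsystem.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With the aer tool attached to cnode1 and waiting, hot-adding a second
# namespace triggers the "aer_cb for log page 4 ... Changed Namespace" lines above.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems output that follows confirms the result: cnode1 now exposes Malloc0 as nsid 1 and Malloc1 as nsid 2.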
00:27:07.599 [ 00:27:07.599 { 00:27:07.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:07.599 "subtype": "Discovery", 00:27:07.599 "listen_addresses": [], 00:27:07.599 "allow_any_host": true, 00:27:07.599 "hosts": [] 00:27:07.599 }, 00:27:07.599 { 00:27:07.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.599 "subtype": "NVMe", 00:27:07.599 "listen_addresses": [ 00:27:07.599 { 00:27:07.599 "trtype": "TCP", 00:27:07.599 "adrfam": "IPv4", 00:27:07.599 "traddr": "10.0.0.2", 00:27:07.599 "trsvcid": "4420" 00:27:07.599 } 00:27:07.599 ], 00:27:07.599 "allow_any_host": true, 00:27:07.599 "hosts": [], 00:27:07.599 "serial_number": "SPDK00000000000001", 00:27:07.599 "model_number": "SPDK bdev Controller", 00:27:07.600 "max_namespaces": 2, 00:27:07.600 "min_cntlid": 1, 00:27:07.600 "max_cntlid": 65519, 00:27:07.600 "namespaces": [ 00:27:07.600 { 00:27:07.600 "nsid": 1, 00:27:07.600 "bdev_name": "Malloc0", 00:27:07.600 "name": "Malloc0", 00:27:07.600 "nguid": "D0F1D5E59CA04B96B6CF81D580642EE8", 00:27:07.600 "uuid": "d0f1d5e5-9ca0-4b96-b6cf-81d580642ee8" 00:27:07.600 }, 00:27:07.600 { 00:27:07.600 "nsid": 2, 00:27:07.600 "bdev_name": "Malloc1", 00:27:07.600 "name": "Malloc1", 00:27:07.600 "nguid": "FACC07E9153B4AECA125B06ED1F3A33D", 00:27:07.600 "uuid": "facc07e9-153b-4aec-a125-b06ed1f3a33d" 00:27:07.600 } 00:27:07.600 ] 00:27:07.600 } 00:27:07.600 ] 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1605015 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.600 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.600 rmmod 
nvme_tcp 00:27:07.600 rmmod nvme_fabrics 00:27:07.600 rmmod nvme_keyring 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1604882 ']' 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1604882 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1604882 ']' 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1604882 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604882 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604882' 00:27:07.868 killing process with pid 1604882 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1604882 00:27:07.868 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1604882 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.127 09:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.029 09:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.029 00:27:10.029 real 0m6.516s 00:27:10.029 user 0m5.386s 00:27:10.029 sys 0m2.617s 00:27:10.029 09:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.029 09:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:10.029 ************************************ 00:27:10.029 END TEST nvmf_aer 00:27:10.029 ************************************ 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.287 ************************************ 00:27:10.287 START TEST nvmf_async_init 00:27:10.287 ************************************ 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:10.287 * Looking for test storage... 00:27:10.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:27:10.287 09:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:10.287 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.288 --rc genhtml_branch_coverage=1 00:27:10.288 --rc genhtml_function_coverage=1 00:27:10.288 --rc genhtml_legend=1 00:27:10.288 --rc geninfo_all_blocks=1 00:27:10.288 --rc geninfo_unexecuted_blocks=1 00:27:10.288 00:27:10.288 ' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.288 --rc genhtml_branch_coverage=1 00:27:10.288 --rc genhtml_function_coverage=1 00:27:10.288 --rc genhtml_legend=1 00:27:10.288 --rc geninfo_all_blocks=1 00:27:10.288 --rc geninfo_unexecuted_blocks=1 00:27:10.288 00:27:10.288 ' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.288 --rc genhtml_branch_coverage=1 00:27:10.288 --rc genhtml_function_coverage=1 00:27:10.288 --rc genhtml_legend=1 00:27:10.288 --rc geninfo_all_blocks=1 00:27:10.288 --rc geninfo_unexecuted_blocks=1 00:27:10.288 00:27:10.288 ' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.288 --rc genhtml_branch_coverage=1 00:27:10.288 --rc genhtml_function_coverage=1 00:27:10.288 --rc genhtml_legend=1 00:27:10.288 --rc geninfo_all_blocks=1 00:27:10.288 --rc geninfo_unexecuted_blocks=1 00:27:10.288 00:27:10.288 ' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.288 09:48:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:10.288 09:48:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8866563dd07c42edb263be18042520d4 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.288 09:48:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:12.819 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:12.819 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:12.819 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:12.820 Found net devices under 0000:84:00.0: cvl_0_0 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:12.820 Found net devices under 0000:84:00.1: cvl_0_1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.820 09:48:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:27:12.820 00:27:12.820 --- 10.0.0.2 ping statistics --- 00:27:12.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.820 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:12.820 00:27:12.820 --- 10.0.0.1 ping statistics --- 00:27:12.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.820 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1607126 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1607126 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1607126 ']' 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:12.820 09:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:12.820 [2024-10-07 09:48:07.562854] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:27:12.820 [2024-10-07 09:48:07.562984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.078 [2024-10-07 09:48:07.658053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.078 [2024-10-07 09:48:07.772156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.078 [2024-10-07 09:48:07.772224] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.078 [2024-10-07 09:48:07.772242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.078 [2024-10-07 09:48:07.772256] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.078 [2024-10-07 09:48:07.772268] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.078 [2024-10-07 09:48:07.773018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 [2024-10-07 09:48:08.593281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 null0 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8866563dd07c42edb263be18042520d4 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.011 [2024-10-07 09:48:08.633552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.011 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 nvme0n1 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 [ 00:27:14.270 { 00:27:14.270 "name": "nvme0n1", 00:27:14.270 "aliases": [ 00:27:14.270 "8866563d-d07c-42ed-b263-be18042520d4" 00:27:14.270 ], 00:27:14.270 "product_name": "NVMe disk", 00:27:14.270 "block_size": 512, 00:27:14.270 "num_blocks": 2097152, 00:27:14.270 "uuid": "8866563d-d07c-42ed-b263-be18042520d4", 00:27:14.270 "numa_id": 1, 00:27:14.270 "assigned_rate_limits": { 00:27:14.270 "rw_ios_per_sec": 0, 00:27:14.270 "rw_mbytes_per_sec": 0, 00:27:14.270 "r_mbytes_per_sec": 0, 00:27:14.270 "w_mbytes_per_sec": 0 00:27:14.270 }, 00:27:14.270 "claimed": false, 00:27:14.270 "zoned": false, 00:27:14.270 "supported_io_types": { 00:27:14.270 "read": true, 00:27:14.270 "write": true, 00:27:14.270 "unmap": false, 00:27:14.270 "flush": true, 00:27:14.270 "reset": true, 00:27:14.270 "nvme_admin": true, 00:27:14.270 "nvme_io": true, 00:27:14.270 "nvme_io_md": false, 00:27:14.270 "write_zeroes": true, 00:27:14.270 "zcopy": false, 00:27:14.270 "get_zone_info": false, 00:27:14.270 "zone_management": false, 00:27:14.270 "zone_append": false, 00:27:14.270 "compare": true, 00:27:14.270 "compare_and_write": true, 00:27:14.270 "abort": true, 00:27:14.270 "seek_hole": false, 00:27:14.270 "seek_data": false, 00:27:14.270 "copy": true, 00:27:14.270 "nvme_iov_md": false 00:27:14.270 }, 00:27:14.270 
"memory_domains": [ 00:27:14.270 { 00:27:14.270 "dma_device_id": "system", 00:27:14.270 "dma_device_type": 1 00:27:14.270 } 00:27:14.270 ], 00:27:14.270 "driver_specific": { 00:27:14.270 "nvme": [ 00:27:14.270 { 00:27:14.270 "trid": { 00:27:14.270 "trtype": "TCP", 00:27:14.270 "adrfam": "IPv4", 00:27:14.270 "traddr": "10.0.0.2", 00:27:14.270 "trsvcid": "4420", 00:27:14.270 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:14.270 }, 00:27:14.270 "ctrlr_data": { 00:27:14.270 "cntlid": 1, 00:27:14.270 "vendor_id": "0x8086", 00:27:14.270 "model_number": "SPDK bdev Controller", 00:27:14.270 "serial_number": "00000000000000000000", 00:27:14.270 "firmware_revision": "25.01", 00:27:14.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.270 "oacs": { 00:27:14.270 "security": 0, 00:27:14.270 "format": 0, 00:27:14.270 "firmware": 0, 00:27:14.270 "ns_manage": 0 00:27:14.270 }, 00:27:14.270 "multi_ctrlr": true, 00:27:14.270 "ana_reporting": false 00:27:14.270 }, 00:27:14.270 "vs": { 00:27:14.270 "nvme_version": "1.3" 00:27:14.270 }, 00:27:14.270 "ns_data": { 00:27:14.270 "id": 1, 00:27:14.270 "can_share": true 00:27:14.270 } 00:27:14.270 } 00:27:14.270 ], 00:27:14.270 "mp_policy": "active_passive" 00:27:14.270 } 00:27:14.270 } 00:27:14.270 ] 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 [2024-10-07 09:48:08.886966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.270 [2024-10-07 09:48:08.887059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24523e0 (9): Bad file descriptor 00:27:14.270 [2024-10-07 09:48:09.029054] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 [ 00:27:14.270 { 00:27:14.270 "name": "nvme0n1", 00:27:14.270 "aliases": [ 00:27:14.270 "8866563d-d07c-42ed-b263-be18042520d4" 00:27:14.270 ], 00:27:14.270 "product_name": "NVMe disk", 00:27:14.270 "block_size": 512, 00:27:14.270 "num_blocks": 2097152, 00:27:14.270 "uuid": "8866563d-d07c-42ed-b263-be18042520d4", 00:27:14.270 "numa_id": 1, 00:27:14.270 "assigned_rate_limits": { 00:27:14.270 "rw_ios_per_sec": 0, 00:27:14.270 "rw_mbytes_per_sec": 0, 00:27:14.270 "r_mbytes_per_sec": 0, 00:27:14.270 "w_mbytes_per_sec": 0 00:27:14.270 }, 00:27:14.270 "claimed": false, 00:27:14.270 "zoned": false, 00:27:14.270 "supported_io_types": { 00:27:14.270 "read": true, 00:27:14.270 "write": true, 00:27:14.270 "unmap": false, 00:27:14.270 "flush": true, 00:27:14.270 "reset": true, 00:27:14.270 "nvme_admin": true, 00:27:14.270 "nvme_io": true, 00:27:14.270 "nvme_io_md": false, 00:27:14.270 "write_zeroes": true, 00:27:14.270 "zcopy": false, 00:27:14.270 "get_zone_info": false, 00:27:14.270 "zone_management": false, 00:27:14.270 "zone_append": false, 00:27:14.270 "compare": true, 00:27:14.270 "compare_and_write": true, 00:27:14.270 "abort": true, 00:27:14.270 "seek_hole": false, 00:27:14.270 "seek_data": false, 00:27:14.270 "copy": true, 00:27:14.270 "nvme_iov_md": false 00:27:14.270 }, 00:27:14.270 "memory_domains": [ 00:27:14.270 { 00:27:14.270 "dma_device_id": "system", 00:27:14.270 "dma_device_type": 1 00:27:14.270 } 00:27:14.270 ], 00:27:14.270 "driver_specific": { 00:27:14.270 "nvme": [ 00:27:14.270 { 00:27:14.270 "trid": { 00:27:14.270 "trtype": "TCP", 00:27:14.270 "adrfam": "IPv4", 00:27:14.270 "traddr": "10.0.0.2", 00:27:14.270 "trsvcid": "4420", 00:27:14.270 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:14.270 }, 00:27:14.270 "ctrlr_data": { 00:27:14.270 "cntlid": 2, 00:27:14.270 "vendor_id": "0x8086", 00:27:14.270 "model_number": "SPDK bdev Controller", 00:27:14.270 "serial_number": "00000000000000000000", 00:27:14.270 "firmware_revision": "25.01", 00:27:14.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.270 "oacs": { 00:27:14.270 "security": 0, 00:27:14.270 "format": 0, 00:27:14.270 "firmware": 0, 00:27:14.270 "ns_manage": 0 00:27:14.270 }, 00:27:14.270 "multi_ctrlr": true, 00:27:14.270 "ana_reporting": false 00:27:14.270 }, 00:27:14.270 "vs": { 00:27:14.270 "nvme_version": "1.3" 00:27:14.270 }, 00:27:14.270 "ns_data": { 00:27:14.270 "id": 1, 00:27:14.270 "can_share": true 00:27:14.270 } 00:27:14.270 } 00:27:14.270 ], 00:27:14.270 "mp_policy": "active_passive" 00:27:14.270 } 00:27:14.270 } 00:27:14.270 ] 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
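Taken together, the async_init pass up to this point is a compact end-to-end recipe: the target exports a 1024 MiB null bdev as namespace 1 of nqn.2016-06.io.spdk:cnode0 under a pre-generated NGUID and listens on 10.0.0.2:4420; the host attaches a bdev_nvme controller to it, dumps the resulting nvme0n1 (ctrlr_data.cntlid 1), forces a controller reset (the transient "Bad file descriptor" above is the old qpair being torn down before the reconnect, after which the second dump shows cntlid 2 for the same UUID), and finally detaches. Condensed from the xtrace into a sketch, with rpc.py again standing in for the harness's rpc_cmd wrapper:

# target side
rpc.py nvmf_create_transport -t tcp -o          # transport opts as set by the harness
rpc.py bdev_null_create null0 1024 512          # 1024 MiB, 512 B blocks (num_blocks 2097152)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host for now
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8866563dd07c42edb263be18042520d4
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# host side: attach, inspect, reset, inspect again, detach
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_get_bdevs -b nvme0n1                # ctrlr_data.cntlid == 1
rpc.py bdev_nvme_reset_controller nvme0         # disconnect and reconnect the same trid
rpc.py bdev_get_bdevs -b nvme0n1                # same uuid, cntlid == 2 after reconnect
rpc.py bdev_nvme_detach_controller nvme0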
00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bFi6xqraFJ 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bFi6xqraFJ 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bFi6xqraFJ 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.270 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 [2024-10-07 09:48:09.087741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:14.529 [2024-10-07 09:48:09.087946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 [2024-10-07 09:48:09.103775] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:14.529 nvme0n1 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 [ 00:27:14.529 { 00:27:14.529 "name": "nvme0n1", 00:27:14.529 "aliases": [ 00:27:14.529 "8866563d-d07c-42ed-b263-be18042520d4" 00:27:14.529 ], 00:27:14.529 "product_name": "NVMe disk", 00:27:14.529 "block_size": 512, 00:27:14.529 "num_blocks": 2097152, 00:27:14.529 "uuid": "8866563d-d07c-42ed-b263-be18042520d4", 00:27:14.529 "numa_id": 1, 00:27:14.529 "assigned_rate_limits": { 00:27:14.529 "rw_ios_per_sec": 0, 00:27:14.529 "rw_mbytes_per_sec": 0, 00:27:14.529 "r_mbytes_per_sec": 0, 00:27:14.529 "w_mbytes_per_sec": 0 00:27:14.529 }, 00:27:14.529 "claimed": false, 00:27:14.529 "zoned": false, 00:27:14.529 "supported_io_types": { 00:27:14.529 "read": true, 00:27:14.529 "write": true, 00:27:14.529 "unmap": false, 00:27:14.529 "flush": true, 00:27:14.529 "reset": true, 00:27:14.529 "nvme_admin": true, 00:27:14.529 "nvme_io": true, 00:27:14.529 "nvme_io_md": false, 00:27:14.529 "write_zeroes": true, 00:27:14.529 "zcopy": false, 00:27:14.529 "get_zone_info": false, 00:27:14.529 "zone_management": false, 00:27:14.529 "zone_append": false, 00:27:14.529 "compare": true, 00:27:14.529 "compare_and_write": true, 00:27:14.529 "abort": true, 00:27:14.529 "seek_hole": false, 00:27:14.529 "seek_data": false, 00:27:14.529 "copy": true, 00:27:14.529 "nvme_iov_md": false 00:27:14.529 }, 00:27:14.529 "memory_domains": [ 00:27:14.529 { 00:27:14.529 "dma_device_id": "system", 00:27:14.529 "dma_device_type": 1 00:27:14.529 } 00:27:14.529 ], 00:27:14.529 "driver_specific": { 00:27:14.529 "nvme": [ 00:27:14.529 { 00:27:14.529 "trid": { 00:27:14.529 "trtype": "TCP", 00:27:14.529 "adrfam": "IPv4", 00:27:14.529 "traddr": "10.0.0.2", 00:27:14.529 "trsvcid": "4421", 00:27:14.529 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:14.529 }, 00:27:14.529 "ctrlr_data": { 00:27:14.529 "cntlid": 3, 00:27:14.529 "vendor_id": "0x8086", 00:27:14.529 "model_number": "SPDK bdev Controller", 00:27:14.529 "serial_number": "00000000000000000000", 00:27:14.529 "firmware_revision": "25.01", 00:27:14.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.529 "oacs": { 00:27:14.529 "security": 0, 00:27:14.529 "format": 0, 00:27:14.529 "firmware": 0, 00:27:14.529 "ns_manage": 0 00:27:14.529 }, 00:27:14.529 "multi_ctrlr": true, 00:27:14.529 "ana_reporting": false 00:27:14.529 }, 00:27:14.529 "vs": { 00:27:14.529 "nvme_version": "1.3" 00:27:14.529 }, 00:27:14.529 "ns_data": { 00:27:14.529 "id": 1, 00:27:14.529 "can_share": true 00:27:14.529 } 00:27:14.529 } 00:27:14.529 ], 00:27:14.529 "mp_policy": "active_passive" 00:27:14.529 } 00:27:14.529 } 00:27:14.529 ] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bFi6xqraFJ 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
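The secure-channel pass that just finished reruns the attach over TLS: a PSK in the NVMe TLS interchange format (NVMeTLSkey-1:01:...) is written to a mktemp file with mode 0600 and registered in the keyring as key0, allow-any-host is switched off, a second listener on port 4421 is created with --secure-channel, the host NQN is admitted with that PSK, and the initiator attaches with the matching --psk; both the listen and the attach paths log that TLS support is still considered experimental in this SPDK revision. The essential rpc.py sequence is sketched below (the key path is the mktemp name from this run and the PSK blob is elided; both are throwaway test values shown in full in the log above):

KEY=/tmp/tmp.bFi6xqraFJ                          # mktemp output in this run
echo -n 'NVMeTLSkey-1:01:...' > "$KEY"           # test PSK, elided here
chmod 0600 "$KEY"

rpc.py keyring_file_add_key key0 "$KEY"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0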
00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.529 rmmod nvme_tcp 00:27:14.529 rmmod nvme_fabrics 00:27:14.529 rmmod nvme_keyring 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1607126 ']' 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1607126 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1607126 ']' 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1607126 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1607126 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1607126' 00:27:14.529 killing process with pid 1607126 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1607126 00:27:14.529 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1607126 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.788 09:48:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.319 00:27:17.319 real 0m6.762s 00:27:17.319 user 0m3.277s 00:27:17.319 sys 0m2.220s 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:17.319 ************************************ 00:27:17.319 END TEST nvmf_async_init 00:27:17.319 ************************************ 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.319 ************************************ 00:27:17.319 START TEST dma 00:27:17.319 ************************************ 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:17.319 * Looking for test storage... 00:27:17.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.319 --rc genhtml_branch_coverage=1 00:27:17.319 --rc genhtml_function_coverage=1 00:27:17.319 --rc genhtml_legend=1 00:27:17.319 --rc geninfo_all_blocks=1 00:27:17.319 --rc geninfo_unexecuted_blocks=1 00:27:17.319 00:27:17.319 ' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.319 --rc genhtml_branch_coverage=1 00:27:17.319 --rc genhtml_function_coverage=1 00:27:17.319 --rc genhtml_legend=1 00:27:17.319 --rc geninfo_all_blocks=1 00:27:17.319 --rc geninfo_unexecuted_blocks=1 00:27:17.319 00:27:17.319 ' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.319 --rc genhtml_branch_coverage=1 00:27:17.319 --rc genhtml_function_coverage=1 00:27:17.319 --rc genhtml_legend=1 00:27:17.319 --rc geninfo_all_blocks=1 00:27:17.319 --rc geninfo_unexecuted_blocks=1 00:27:17.319 00:27:17.319 ' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.319 --rc genhtml_branch_coverage=1 00:27:17.319 --rc genhtml_function_coverage=1 00:27:17.319 --rc genhtml_legend=1 00:27:17.319 --rc geninfo_all_blocks=1 00:27:17.319 --rc geninfo_unexecuted_blocks=1 00:27:17.319 00:27:17.319 ' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.319 
09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.319 09:48:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:17.320 00:27:17.320 real 0m0.193s 00:27:17.320 user 0m0.128s 00:27:17.320 sys 0m0.077s 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:17.320 ************************************ 00:27:17.320 END TEST dma 00:27:17.320 ************************************ 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.320 ************************************ 00:27:17.320 START TEST nvmf_identify 00:27:17.320 
************************************ 00:27:17.320 09:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:17.320 * Looking for test storage... 00:27:17.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.320 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:17.320 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:27:17.320 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:17.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.579 --rc genhtml_branch_coverage=1 00:27:17.579 --rc genhtml_function_coverage=1 00:27:17.579 --rc genhtml_legend=1 00:27:17.579 --rc geninfo_all_blocks=1 00:27:17.579 --rc geninfo_unexecuted_blocks=1 00:27:17.579 00:27:17.579 ' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:17.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.579 --rc genhtml_branch_coverage=1 00:27:17.579 --rc genhtml_function_coverage=1 00:27:17.579 --rc genhtml_legend=1 00:27:17.579 --rc geninfo_all_blocks=1 00:27:17.579 --rc geninfo_unexecuted_blocks=1 00:27:17.579 00:27:17.579 ' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:17.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.579 --rc genhtml_branch_coverage=1 00:27:17.579 --rc genhtml_function_coverage=1 00:27:17.579 --rc genhtml_legend=1 00:27:17.579 --rc geninfo_all_blocks=1 00:27:17.579 --rc geninfo_unexecuted_blocks=1 00:27:17.579 00:27:17.579 ' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:17.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.579 --rc genhtml_branch_coverage=1 00:27:17.579 --rc genhtml_function_coverage=1 00:27:17.579 --rc genhtml_legend=1 00:27:17.579 --rc geninfo_all_blocks=1 00:27:17.579 --rc geninfo_unexecuted_blocks=1 00:27:17.579 00:27:17.579 ' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.579 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.580 09:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.111 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:20.112 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:20.112 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
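The nvmf_tcp_init steps traced below wire the two E810 ports found above into a single-host test topology: the target port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, and TCP port 4420 is opened; condensed from the ip/iptables calls that follow (interface and namespace names are the ones from this run), the sequence is:

  # condensed sketch of the nvmf_tcp_init bring-up logged below
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # sanity check, matches the ping output below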
00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:20.112 Found net devices under 0000:84:00.0: cvl_0_0 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:20.112 Found net devices under 0000:84:00.1: cvl_0_1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:20.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:27:20.112 00:27:20.112 --- 10.0.0.2 ping statistics --- 00:27:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.112 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:27:20.112 00:27:20.112 --- 10.0.0.1 ping statistics --- 00:27:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.112 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:27:20.112 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.113 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1609920 00:27:20.371 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:20.371 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.371 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1609920 00:27:20.371 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1609920 ']' 00:27:20.371 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.372 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.372 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.372 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.372 09:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.372 [2024-10-07 09:48:14.989954] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:27:20.372 [2024-10-07 09:48:14.990058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.372 [2024-10-07 09:48:15.073618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:20.631 [2024-10-07 09:48:15.193083] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.631 [2024-10-07 09:48:15.193138] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.631 [2024-10-07 09:48:15.193151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.631 [2024-10-07 09:48:15.193163] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.631 [2024-10-07 09:48:15.193172] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.631 [2024-10-07 09:48:15.195029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.631 [2024-10-07 09:48:15.195096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.631 [2024-10-07 09:48:15.195193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.631 [2024-10-07 09:48:15.195196] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 [2024-10-07 09:48:15.331310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 Malloc0 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 [2024-10-07 09:48:15.416972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.631 [ 00:27:20.631 { 00:27:20.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:20.631 "subtype": "Discovery", 00:27:20.631 "listen_addresses": [ 00:27:20.631 { 00:27:20.631 "trtype": "TCP", 00:27:20.631 "adrfam": "IPv4", 00:27:20.631 "traddr": "10.0.0.2", 00:27:20.631 "trsvcid": "4420" 00:27:20.631 } 00:27:20.631 ], 00:27:20.631 "allow_any_host": true, 00:27:20.631 "hosts": [] 00:27:20.631 }, 00:27:20.631 { 00:27:20.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.631 "subtype": "NVMe", 00:27:20.631 "listen_addresses": [ 00:27:20.631 { 00:27:20.631 "trtype": "TCP", 00:27:20.631 "adrfam": "IPv4", 00:27:20.631 "traddr": "10.0.0.2", 00:27:20.631 "trsvcid": "4420" 00:27:20.631 } 00:27:20.631 ], 00:27:20.631 "allow_any_host": true, 00:27:20.631 "hosts": [], 00:27:20.631 "serial_number": "SPDK00000000000001", 00:27:20.631 "model_number": "SPDK bdev Controller", 00:27:20.631 "max_namespaces": 32, 00:27:20.631 "min_cntlid": 1, 00:27:20.631 "max_cntlid": 65519, 00:27:20.631 "namespaces": [ 00:27:20.631 { 00:27:20.631 "nsid": 1, 00:27:20.631 "bdev_name": "Malloc0", 00:27:20.631 "name": "Malloc0", 00:27:20.631 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:20.631 "eui64": "ABCDEF0123456789", 00:27:20.631 "uuid": "fe43226f-6b5f-47c8-9ffb-41084146bf69" 00:27:20.631 } 00:27:20.631 ] 00:27:20.631 } 00:27:20.631 ] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.631 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:20.892 [2024-10-07 09:48:15.460659] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:27:20.892 [2024-10-07 09:48:15.460710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609967 ] 00:27:20.892 [2024-10-07 09:48:15.498157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:20.892 [2024-10-07 09:48:15.498241] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:20.892 [2024-10-07 09:48:15.498252] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:20.892 [2024-10-07 09:48:15.498267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:20.892 [2024-10-07 09:48:15.498294] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:20.892 [2024-10-07 09:48:15.499060] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:20.892 [2024-10-07 09:48:15.499109] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa42760 0 00:27:20.892 [2024-10-07 09:48:15.512906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:20.892 [2024-10-07 09:48:15.512930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:20.892 [2024-10-07 09:48:15.512946] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:20.892 [2024-10-07 09:48:15.512952] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:20.892 [2024-10-07 09:48:15.512988] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.892 [2024-10-07 09:48:15.513001] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.892 [2024-10-07 09:48:15.513008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.892 [2024-10-07 09:48:15.513025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:20.892 [2024-10-07 09:48:15.513052] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.892 [2024-10-07 09:48:15.516919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.892 [2024-10-07 09:48:15.516948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.892 [2024-10-07 09:48:15.516955] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.892 [2024-10-07 09:48:15.516962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.892 [2024-10-07 09:48:15.516982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:20.892 [2024-10-07 09:48:15.516994] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:20.892 [2024-10-07 09:48:15.517003] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:20.892 [2024-10-07 09:48:15.517024] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.892 [2024-10-07 09:48:15.517033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.892 [2024-10-07 09:48:15.517039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.892 [2024-10-07 09:48:15.517050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.892 [2024-10-07 09:48:15.517074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.517195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.517223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.517230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.517245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:20.893 [2024-10-07 09:48:15.517258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:20.893 [2024-10-07 09:48:15.517269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.517292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.517313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.517450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.517462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.517468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517474] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.517487] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:20.893 [2024-10-07 09:48:15.517501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.517513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.517536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.517555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 
[2024-10-07 09:48:15.517641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.517653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.517659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.517673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.517688] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.517712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.517731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.517847] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.517860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.517866] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.517873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.517880] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:20.893 [2024-10-07 09:48:15.517888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.517930] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.518041] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:20.893 [2024-10-07 09:48:15.518049] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.518063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518070] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.518086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.518107] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.518209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.518240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:27:20.893 [2024-10-07 09:48:15.518247] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518254] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.518262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:20.893 [2024-10-07 09:48:15.518279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518287] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.518303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.518323] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.518452] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.518465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.518471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518477] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.518485] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:20.893 [2024-10-07 09:48:15.518492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:20.893 [2024-10-07 09:48:15.518505] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:20.893 [2024-10-07 09:48:15.518519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:20.893 [2024-10-07 09:48:15.518535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.518552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.893 [2024-10-07 09:48:15.518573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.518711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:20.893 [2024-10-07 09:48:15.518725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:20.893 [2024-10-07 09:48:15.518731] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518737] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa42760): datao=0, datal=4096, cccid=0 00:27:20.893 [2024-10-07 09:48:15.518754] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa2480) on tqpair(0xa42760): expected_datao=0, payload_size=4096 
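The records above trace the standard NVMe-over-Fabrics controller-enable handshake for the discovery controller: the host first waits for CSTS.RDY = 0 with the controller disabled, writes CC.EN = 1 through a Fabrics Property Set capsule, then polls CSTS through Fabrics Property Get until RDY = 1, and only then issues IDENTIFY. A minimal, self-contained C sketch of that handshake follows; fabrics_property_get()/fabrics_property_set() are hypothetical helpers standing in for the transport's property capsules (they are not SPDK functions), and the register offsets and bits come from the NVMe base specification.

#include <stdint.h>
#include <stdio.h>

/* NVMe register offsets and bits from the NVMe base specification. */
#define NVME_REG_CC    0x14u           /* Controller Configuration */
#define NVME_REG_CSTS  0x1cu           /* Controller Status        */
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

/*
 * Hypothetical stand-ins for the transport's Fabrics Property Get/Set
 * capsules (the "FABRIC PROPERTY GET/SET qid:0" commands in the trace).
 * They are backed by a fake register file in which CSTS.RDY simply mirrors
 * CC.EN, so the sketch is self-contained and the polling loops terminate.
 */
static uint32_t fake_regs[0x40 / 4];

static uint32_t fabrics_property_get(uint32_t off)
{
	if (off == NVME_REG_CSTS) {
		fake_regs[NVME_REG_CSTS / 4] =
		    (fake_regs[NVME_REG_CC / 4] & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
	}
	return fake_regs[off / 4];
}

static void fabrics_property_set(uint32_t off, uint32_t val)
{
	fake_regs[off / 4] = val;
}

/* The enable handshake the trace walks through for the discovery controller. */
int main(void)
{
	uint32_t cc = fabrics_property_get(NVME_REG_CC);

	/* "setting state to disable and wait for CSTS.RDY = 0" */
	fabrics_property_set(NVME_REG_CC, cc & ~NVME_CC_EN);
	while (fabrics_property_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
		;

	/* "enable controller by writing CC.EN = 1" (Fabrics Property Set) */
	fabrics_property_set(NVME_REG_CC, cc | NVME_CC_EN);

	/* "wait for CSTS.RDY = 1" by polling CSTS via Fabrics Property Get */
	while (!(fabrics_property_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
		;

	printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready, proceed to IDENTIFY\n");
	return 0;
}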
00:27:20.893 [2024-10-07 09:48:15.518761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518771] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518779] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518810] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.518821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.518827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518833] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.518845] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:20.893 [2024-10-07 09:48:15.518857] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:20.893 [2024-10-07 09:48:15.518865] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:20.893 [2024-10-07 09:48:15.518873] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:20.893 [2024-10-07 09:48:15.518880] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:20.893 [2024-10-07 09:48:15.518887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:20.893 [2024-10-07 09:48:15.518926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:20.893 [2024-10-07 09:48:15.518946] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.518959] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.893 [2024-10-07 09:48:15.518970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:20.893 [2024-10-07 09:48:15.518991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.893 [2024-10-07 09:48:15.519121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.893 [2024-10-07 09:48:15.519135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.893 [2024-10-07 09:48:15.519141] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.519148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.893 [2024-10-07 09:48:15.519160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.519167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.893 [2024-10-07 09:48:15.519173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.894 [2024-10-07 09:48:15.519192] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.894 [2024-10-07 09:48:15.519223] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.894 [2024-10-07 09:48:15.519267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519274] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.894 [2024-10-07 09:48:15.519296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:20.894 [2024-10-07 09:48:15.519315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:20.894 [2024-10-07 09:48:15.519330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.894 [2024-10-07 09:48:15.519368] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2480, cid 0, qid 0 00:27:20.894 [2024-10-07 09:48:15.519379] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2600, cid 1, qid 0 00:27:20.894 [2024-10-07 09:48:15.519386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2780, cid 2, qid 0 00:27:20.894 [2024-10-07 09:48:15.519393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.894 [2024-10-07 09:48:15.519400] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2a80, cid 4, qid 0 00:27:20.894 [2024-10-07 09:48:15.519578] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.519591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.519598] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2a80) on 
tqpair=0xa42760 00:27:20.894 [2024-10-07 09:48:15.519612] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:20.894 [2024-10-07 09:48:15.519620] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:20.894 [2024-10-07 09:48:15.519637] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.894 [2024-10-07 09:48:15.519676] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2a80, cid 4, qid 0 00:27:20.894 [2024-10-07 09:48:15.519764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:20.894 [2024-10-07 09:48:15.519776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:20.894 [2024-10-07 09:48:15.519782] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519788] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa42760): datao=0, datal=4096, cccid=4 00:27:20.894 [2024-10-07 09:48:15.519795] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa2a80) on tqpair(0xa42760): expected_datao=0, payload_size=4096 00:27:20.894 [2024-10-07 09:48:15.519802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519818] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519826] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.519848] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.519854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2a80) on tqpair=0xa42760 00:27:20.894 [2024-10-07 09:48:15.519903] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:20.894 [2024-10-07 09:48:15.519958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.519970] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.519981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.894 [2024-10-07 09:48:15.519999] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.520023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.894 [2024-10-07 09:48:15.520046] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2a80, cid 4, qid 0 00:27:20.894 [2024-10-07 09:48:15.520058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2c00, cid 5, qid 0 00:27:20.894 [2024-10-07 09:48:15.520202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:20.894 [2024-10-07 09:48:15.520215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:20.894 [2024-10-07 09:48:15.520222] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520228] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa42760): datao=0, datal=1024, cccid=4 00:27:20.894 [2024-10-07 09:48:15.520235] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa2a80) on tqpair(0xa42760): expected_datao=0, payload_size=1024 00:27:20.894 [2024-10-07 09:48:15.520242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520251] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520271] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.520288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.520294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.520300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2c00) on tqpair=0xa42760 00:27:20.894 [2024-10-07 09:48:15.561042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.561060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.561068] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2a80) on tqpair=0xa42760 00:27:20.894 [2024-10-07 09:48:15.561097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561107] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.561118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.894 [2024-10-07 09:48:15.561149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2a80, cid 4, qid 0 00:27:20.894 [2024-10-07 09:48:15.561273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:20.894 [2024-10-07 09:48:15.561287] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:20.894 [2024-10-07 09:48:15.561294] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa42760): datao=0, datal=3072, cccid=4 00:27:20.894 [2024-10-07 09:48:15.561307] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa2a80) on tqpair(0xa42760): expected_datao=0, payload_size=3072 00:27:20.894 [2024-10-07 09:48:15.561313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561322] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:20.894 
[2024-10-07 09:48:15.561330] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561356] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.561367] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.561373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2a80) on tqpair=0xa42760 00:27:20.894 [2024-10-07 09:48:15.561398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa42760) 00:27:20.894 [2024-10-07 09:48:15.561416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.894 [2024-10-07 09:48:15.561444] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2a80, cid 4, qid 0 00:27:20.894 [2024-10-07 09:48:15.561550] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:20.894 [2024-10-07 09:48:15.561561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:20.894 [2024-10-07 09:48:15.561568] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa42760): datao=0, datal=8, cccid=4 00:27:20.894 [2024-10-07 09:48:15.561580] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa2a80) on tqpair(0xa42760): expected_datao=0, payload_size=8 00:27:20.894 [2024-10-07 09:48:15.561587] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561596] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.561602] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.602054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.894 [2024-10-07 09:48:15.602072] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.894 [2024-10-07 09:48:15.602079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.894 [2024-10-07 09:48:15.602086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2a80) on tqpair=0xa42760 00:27:20.894 ===================================================== 00:27:20.894 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:20.895 ===================================================== 00:27:20.895 Controller Capabilities/Features 00:27:20.895 ================================ 00:27:20.895 Vendor ID: 0000 00:27:20.895 Subsystem Vendor ID: 0000 00:27:20.895 Serial Number: .................... 00:27:20.895 Model Number: ........................................ 
00:27:20.895 Firmware Version: 25.01 00:27:20.895 Recommended Arb Burst: 0 00:27:20.895 IEEE OUI Identifier: 00 00 00 00:27:20.895 Multi-path I/O 00:27:20.895 May have multiple subsystem ports: No 00:27:20.895 May have multiple controllers: No 00:27:20.895 Associated with SR-IOV VF: No 00:27:20.895 Max Data Transfer Size: 131072 00:27:20.895 Max Number of Namespaces: 0 00:27:20.895 Max Number of I/O Queues: 1024 00:27:20.895 NVMe Specification Version (VS): 1.3 00:27:20.895 NVMe Specification Version (Identify): 1.3 00:27:20.895 Maximum Queue Entries: 128 00:27:20.895 Contiguous Queues Required: Yes 00:27:20.895 Arbitration Mechanisms Supported 00:27:20.895 Weighted Round Robin: Not Supported 00:27:20.895 Vendor Specific: Not Supported 00:27:20.895 Reset Timeout: 15000 ms 00:27:20.895 Doorbell Stride: 4 bytes 00:27:20.895 NVM Subsystem Reset: Not Supported 00:27:20.895 Command Sets Supported 00:27:20.895 NVM Command Set: Supported 00:27:20.895 Boot Partition: Not Supported 00:27:20.895 Memory Page Size Minimum: 4096 bytes 00:27:20.895 Memory Page Size Maximum: 4096 bytes 00:27:20.895 Persistent Memory Region: Not Supported 00:27:20.895 Optional Asynchronous Events Supported 00:27:20.895 Namespace Attribute Notices: Not Supported 00:27:20.895 Firmware Activation Notices: Not Supported 00:27:20.895 ANA Change Notices: Not Supported 00:27:20.895 PLE Aggregate Log Change Notices: Not Supported 00:27:20.895 LBA Status Info Alert Notices: Not Supported 00:27:20.895 EGE Aggregate Log Change Notices: Not Supported 00:27:20.895 Normal NVM Subsystem Shutdown event: Not Supported 00:27:20.895 Zone Descriptor Change Notices: Not Supported 00:27:20.895 Discovery Log Change Notices: Supported 00:27:20.895 Controller Attributes 00:27:20.895 128-bit Host Identifier: Not Supported 00:27:20.895 Non-Operational Permissive Mode: Not Supported 00:27:20.895 NVM Sets: Not Supported 00:27:20.895 Read Recovery Levels: Not Supported 00:27:20.895 Endurance Groups: Not Supported 00:27:20.895 Predictable Latency Mode: Not Supported 00:27:20.895 Traffic Based Keep ALive: Not Supported 00:27:20.895 Namespace Granularity: Not Supported 00:27:20.895 SQ Associations: Not Supported 00:27:20.895 UUID List: Not Supported 00:27:20.895 Multi-Domain Subsystem: Not Supported 00:27:20.895 Fixed Capacity Management: Not Supported 00:27:20.895 Variable Capacity Management: Not Supported 00:27:20.895 Delete Endurance Group: Not Supported 00:27:20.895 Delete NVM Set: Not Supported 00:27:20.895 Extended LBA Formats Supported: Not Supported 00:27:20.895 Flexible Data Placement Supported: Not Supported 00:27:20.895 00:27:20.895 Controller Memory Buffer Support 00:27:20.895 ================================ 00:27:20.895 Supported: No 00:27:20.895 00:27:20.895 Persistent Memory Region Support 00:27:20.895 ================================ 00:27:20.895 Supported: No 00:27:20.895 00:27:20.895 Admin Command Set Attributes 00:27:20.895 ============================ 00:27:20.895 Security Send/Receive: Not Supported 00:27:20.895 Format NVM: Not Supported 00:27:20.895 Firmware Activate/Download: Not Supported 00:27:20.895 Namespace Management: Not Supported 00:27:20.895 Device Self-Test: Not Supported 00:27:20.895 Directives: Not Supported 00:27:20.895 NVMe-MI: Not Supported 00:27:20.895 Virtualization Management: Not Supported 00:27:20.895 Doorbell Buffer Config: Not Supported 00:27:20.895 Get LBA Status Capability: Not Supported 00:27:20.895 Command & Feature Lockdown Capability: Not Supported 00:27:20.895 Abort Command Limit: 1 00:27:20.895 Async 
Event Request Limit: 4 00:27:20.895 Number of Firmware Slots: N/A 00:27:20.895 Firmware Slot 1 Read-Only: N/A 00:27:20.895 Firmware Activation Without Reset: N/A 00:27:20.895 Multiple Update Detection Support: N/A 00:27:20.895 Firmware Update Granularity: No Information Provided 00:27:20.895 Per-Namespace SMART Log: No 00:27:20.895 Asymmetric Namespace Access Log Page: Not Supported 00:27:20.895 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:20.895 Command Effects Log Page: Not Supported 00:27:20.895 Get Log Page Extended Data: Supported 00:27:20.895 Telemetry Log Pages: Not Supported 00:27:20.895 Persistent Event Log Pages: Not Supported 00:27:20.895 Supported Log Pages Log Page: May Support 00:27:20.895 Commands Supported & Effects Log Page: Not Supported 00:27:20.895 Feature Identifiers & Effects Log Page:May Support 00:27:20.895 NVMe-MI Commands & Effects Log Page: May Support 00:27:20.895 Data Area 4 for Telemetry Log: Not Supported 00:27:20.895 Error Log Page Entries Supported: 128 00:27:20.895 Keep Alive: Not Supported 00:27:20.895 00:27:20.895 NVM Command Set Attributes 00:27:20.895 ========================== 00:27:20.895 Submission Queue Entry Size 00:27:20.895 Max: 1 00:27:20.895 Min: 1 00:27:20.895 Completion Queue Entry Size 00:27:20.895 Max: 1 00:27:20.895 Min: 1 00:27:20.895 Number of Namespaces: 0 00:27:20.895 Compare Command: Not Supported 00:27:20.895 Write Uncorrectable Command: Not Supported 00:27:20.895 Dataset Management Command: Not Supported 00:27:20.895 Write Zeroes Command: Not Supported 00:27:20.895 Set Features Save Field: Not Supported 00:27:20.895 Reservations: Not Supported 00:27:20.895 Timestamp: Not Supported 00:27:20.895 Copy: Not Supported 00:27:20.895 Volatile Write Cache: Not Present 00:27:20.895 Atomic Write Unit (Normal): 1 00:27:20.895 Atomic Write Unit (PFail): 1 00:27:20.895 Atomic Compare & Write Unit: 1 00:27:20.895 Fused Compare & Write: Supported 00:27:20.895 Scatter-Gather List 00:27:20.895 SGL Command Set: Supported 00:27:20.895 SGL Keyed: Supported 00:27:20.895 SGL Bit Bucket Descriptor: Not Supported 00:27:20.895 SGL Metadata Pointer: Not Supported 00:27:20.895 Oversized SGL: Not Supported 00:27:20.895 SGL Metadata Address: Not Supported 00:27:20.895 SGL Offset: Supported 00:27:20.895 Transport SGL Data Block: Not Supported 00:27:20.895 Replay Protected Memory Block: Not Supported 00:27:20.895 00:27:20.895 Firmware Slot Information 00:27:20.895 ========================= 00:27:20.895 Active slot: 0 00:27:20.895 00:27:20.895 00:27:20.895 Error Log 00:27:20.895 ========= 00:27:20.895 00:27:20.895 Active Namespaces 00:27:20.895 ================= 00:27:20.895 Discovery Log Page 00:27:20.895 ================== 00:27:20.895 Generation Counter: 2 00:27:20.895 Number of Records: 2 00:27:20.895 Record Format: 0 00:27:20.895 00:27:20.895 Discovery Log Entry 0 00:27:20.895 ---------------------- 00:27:20.895 Transport Type: 3 (TCP) 00:27:20.895 Address Family: 1 (IPv4) 00:27:20.895 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:20.895 Entry Flags: 00:27:20.895 Duplicate Returned Information: 1 00:27:20.895 Explicit Persistent Connection Support for Discovery: 1 00:27:20.895 Transport Requirements: 00:27:20.895 Secure Channel: Not Required 00:27:20.895 Port ID: 0 (0x0000) 00:27:20.895 Controller ID: 65535 (0xffff) 00:27:20.895 Admin Max SQ Size: 128 00:27:20.895 Transport Service Identifier: 4420 00:27:20.895 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:20.895 Transport Address: 10.0.0.2 00:27:20.895 
Discovery Log Entry 1 00:27:20.895 ---------------------- 00:27:20.895 Transport Type: 3 (TCP) 00:27:20.895 Address Family: 1 (IPv4) 00:27:20.895 Subsystem Type: 2 (NVM Subsystem) 00:27:20.895 Entry Flags: 00:27:20.895 Duplicate Returned Information: 0 00:27:20.895 Explicit Persistent Connection Support for Discovery: 0 00:27:20.895 Transport Requirements: 00:27:20.895 Secure Channel: Not Required 00:27:20.895 Port ID: 0 (0x0000) 00:27:20.895 Controller ID: 65535 (0xffff) 00:27:20.895 Admin Max SQ Size: 128 00:27:20.895 Transport Service Identifier: 4420 00:27:20.895 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:20.895 Transport Address: 10.0.0.2 [2024-10-07 09:48:15.602214] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:20.895 [2024-10-07 09:48:15.602235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2480) on tqpair=0xa42760 00:27:20.895 [2024-10-07 09:48:15.602246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.895 [2024-10-07 09:48:15.602255] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2600) on tqpair=0xa42760 00:27:20.895 [2024-10-07 09:48:15.602262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.895 [2024-10-07 09:48:15.602269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2780) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.602276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.896 [2024-10-07 09:48:15.602284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.602290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.896 [2024-10-07 09:48:15.602303] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.602326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.602350] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.602468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.602482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.602488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.602510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602518] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.602533] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.602559] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.602663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.602676] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.602682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.602696] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:20.896 [2024-10-07 09:48:15.602708] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:20.896 [2024-10-07 09:48:15.602725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602739] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.602749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.602769] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.602865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.602900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.602908] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.602932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.602947] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.602957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.602978] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.603109] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.603122] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.603129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603135] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.603151] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603166] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.603191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.603212] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.603338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.603355] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.603362] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.603385] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603393] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603399] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.603409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.603429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.603510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.603523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.603529] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.603550] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603565] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.603574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.603594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.603672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.603685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.603691] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.603712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603721] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603727] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.603736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.603756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.603835] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.603848] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.603854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.603875] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.603884] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.607898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa42760) 00:27:20.896 [2024-10-07 09:48:15.607917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.896 [2024-10-07 09:48:15.607940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa2900, cid 3, qid 0 00:27:20.896 [2024-10-07 09:48:15.608093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:20.896 [2024-10-07 09:48:15.608107] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:20.896 [2024-10-07 09:48:15.608118] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:20.896 [2024-10-07 09:48:15.608125] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa2900) on tqpair=0xa42760 00:27:20.896 [2024-10-07 09:48:15.608139] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:20.896 00:27:20.896 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:20.896 [2024-10-07 09:48:15.666743] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
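Here host/identify.sh re-runs spdk_nvme_identify, this time pointed at the NVM subsystem nqn.2016-06.io.spdk:cnode1 that Discovery Log Entry 1 above advertised at 10.0.0.2:4420; the "Starting SPDK v25.01-pre" line and the DPDK EAL parameter dump that follow are that process initializing its environment before connecting. At the public-API level the run is roughly equivalent to the C sketch below; this is a minimal sketch against SPDK's documented host API, not the tool's actual source, and the program name and error handling are illustrative.

/* SPDK public host API; these headers ship with the spdk tree built in this job. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";     /* illustrative process name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport ID string that host/identify.sh passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connecting performs the admin-queue bring-up traced above:
	 * ICREQ/ICRESP, FABRIC CONNECT, property handshake, IDENTIFY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s MN: %.40s FR: %.8s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn,
	       (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() drives the same connect/property/IDENTIFY state machine the debug trace showed for the discovery controller, which is why an identical sequence of records repeats below for nqn.2016-06.io.spdk:cnode1.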
00:27:20.896 [2024-10-07 09:48:15.666844] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610070 ] 00:27:21.159 [2024-10-07 09:48:15.716091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:21.159 [2024-10-07 09:48:15.716151] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:21.159 [2024-10-07 09:48:15.716162] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:21.159 [2024-10-07 09:48:15.716193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:21.159 [2024-10-07 09:48:15.716206] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:21.159 [2024-10-07 09:48:15.720196] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:21.159 [2024-10-07 09:48:15.720236] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd6a760 0 00:27:21.159 [2024-10-07 09:48:15.726906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:21.159 [2024-10-07 09:48:15.726928] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:21.159 [2024-10-07 09:48:15.726935] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:21.159 [2024-10-07 09:48:15.726942] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:21.159 [2024-10-07 09:48:15.726971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.726983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.726990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.159 [2024-10-07 09:48:15.727004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:21.159 [2024-10-07 09:48:15.727030] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.159 [2024-10-07 09:48:15.733904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.159 [2024-10-07 09:48:15.733922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.159 [2024-10-07 09:48:15.733929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.733936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.159 [2024-10-07 09:48:15.733955] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:21.159 [2024-10-07 09:48:15.733966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:21.159 [2024-10-07 09:48:15.733975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:21.159 [2024-10-07 09:48:15.733993] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734012] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.159 [2024-10-07 09:48:15.734025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.159 [2024-10-07 09:48:15.734048] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.159 [2024-10-07 09:48:15.734198] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.159 [2024-10-07 09:48:15.734210] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.159 [2024-10-07 09:48:15.734217] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.159 [2024-10-07 09:48:15.734230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:21.159 [2024-10-07 09:48:15.734243] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:21.159 [2024-10-07 09:48:15.734255] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.159 [2024-10-07 09:48:15.734277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.159 [2024-10-07 09:48:15.734299] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.159 [2024-10-07 09:48:15.734422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.159 [2024-10-07 09:48:15.734435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.159 [2024-10-07 09:48:15.734442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.159 [2024-10-07 09:48:15.734455] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:21.159 [2024-10-07 09:48:15.734469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:21.159 [2024-10-07 09:48:15.734481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734493] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.159 [2024-10-07 09:48:15.734503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.159 [2024-10-07 09:48:15.734523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.159 [2024-10-07 09:48:15.734600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.159 [2024-10-07 09:48:15.734611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.159 [2024-10-07 09:48:15.734618] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734624] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.159 [2024-10-07 09:48:15.734631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:21.159 [2024-10-07 09:48:15.734648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.159 [2024-10-07 09:48:15.734662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.734672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.160 [2024-10-07 09:48:15.734695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.160 [2024-10-07 09:48:15.734774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.160 [2024-10-07 09:48:15.734788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.160 [2024-10-07 09:48:15.734794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.734800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.160 [2024-10-07 09:48:15.734807] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:21.160 [2024-10-07 09:48:15.734815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:21.160 [2024-10-07 09:48:15.734828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:21.160 [2024-10-07 09:48:15.734938] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:21.160 [2024-10-07 09:48:15.734947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:21.160 [2024-10-07 09:48:15.734959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.734966] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.734973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.734983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.160 [2024-10-07 09:48:15.735005] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.160 [2024-10-07 09:48:15.735125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.160 [2024-10-07 09:48:15.735137] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.160 [2024-10-07 09:48:15.735144] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735150] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.160 [2024-10-07 09:48:15.735158] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:21.160 [2024-10-07 09:48:15.735174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.735200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.160 [2024-10-07 09:48:15.735235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.160 [2024-10-07 09:48:15.735320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.160 [2024-10-07 09:48:15.735331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.160 [2024-10-07 09:48:15.735338] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735344] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.160 [2024-10-07 09:48:15.735351] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:21.160 [2024-10-07 09:48:15.735358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:21.160 [2024-10-07 09:48:15.735371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:21.160 [2024-10-07 09:48:15.735384] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:21.160 [2024-10-07 09:48:15.735401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.735419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.160 [2024-10-07 09:48:15.735440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.160 [2024-10-07 09:48:15.735572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.160 [2024-10-07 09:48:15.735583] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.160 [2024-10-07 09:48:15.735590] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=4096, cccid=0 00:27:21.160 [2024-10-07 09:48:15.735602] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdca480) on tqpair(0xd6a760): expected_datao=0, payload_size=4096 00:27:21.160 [2024-10-07 09:48:15.735609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735619] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735625] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 
09:48:15.735636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.160 [2024-10-07 09:48:15.735645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.160 [2024-10-07 09:48:15.735651] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735657] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.160 [2024-10-07 09:48:15.735667] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:21.160 [2024-10-07 09:48:15.735675] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:21.160 [2024-10-07 09:48:15.735682] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:21.160 [2024-10-07 09:48:15.735688] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:21.160 [2024-10-07 09:48:15.735695] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:21.160 [2024-10-07 09:48:15.735702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:21.160 [2024-10-07 09:48:15.735716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:21.160 [2024-10-07 09:48:15.735727] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.735750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:21.160 [2024-10-07 09:48:15.735770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.160 [2024-10-07 09:48:15.735858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.160 [2024-10-07 09:48:15.735886] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.160 [2024-10-07 09:48:15.735901] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.160 [2024-10-07 09:48:15.735919] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735926] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd6a760) 00:27:21.160 [2024-10-07 09:48:15.735945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.160 [2024-10-07 09:48:15.735957] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735969] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd6a760) 00:27:21.160 
[2024-10-07 09:48:15.735978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.160 [2024-10-07 09:48:15.735987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.735994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.160 [2024-10-07 09:48:15.736000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.736008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.161 [2024-10-07 09:48:15.736018] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.736039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.161 [2024-10-07 09:48:15.736047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.736096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.161 [2024-10-07 09:48:15.736118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca480, cid 0, qid 0 00:27:21.161 [2024-10-07 09:48:15.736129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca600, cid 1, qid 0 00:27:21.161 [2024-10-07 09:48:15.736137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca780, cid 2, qid 0 00:27:21.161 [2024-10-07 09:48:15.736144] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.161 [2024-10-07 09:48:15.736151] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.161 [2024-10-07 09:48:15.736309] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.161 [2024-10-07 09:48:15.736323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.161 [2024-10-07 09:48:15.736330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.161 [2024-10-07 09:48:15.736343] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:21.161 [2024-10-07 09:48:15.736351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736365] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736393] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.736416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:21.161 [2024-10-07 09:48:15.736436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.161 [2024-10-07 09:48:15.736557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.161 [2024-10-07 09:48:15.736570] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.161 [2024-10-07 09:48:15.736577] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.161 [2024-10-07 09:48:15.736647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.736680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.736697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.161 [2024-10-07 09:48:15.736718] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.161 [2024-10-07 09:48:15.736845] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.161 [2024-10-07 09:48:15.736858] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.161 [2024-10-07 09:48:15.736865] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736886] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=4096, cccid=4 00:27:21.161 [2024-10-07 09:48:15.736901] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcaa80) on tqpair(0xd6a760): expected_datao=0, payload_size=4096 00:27:21.161 [2024-10-07 09:48:15.736909] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736928] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736937] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736956] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.161 [2024-10-07 09:48:15.736967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:27:21.161 [2024-10-07 09:48:15.736973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.736980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.161 [2024-10-07 09:48:15.736994] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:21.161 [2024-10-07 09:48:15.737017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.737036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.737050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737058] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.737068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.161 [2024-10-07 09:48:15.737091] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.161 [2024-10-07 09:48:15.737211] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.161 [2024-10-07 09:48:15.737224] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.161 [2024-10-07 09:48:15.737230] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737236] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=4096, cccid=4 00:27:21.161 [2024-10-07 09:48:15.737243] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcaa80) on tqpair(0xd6a760): expected_datao=0, payload_size=4096 00:27:21.161 [2024-10-07 09:48:15.737250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737268] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737277] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.161 [2024-10-07 09:48:15.737296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.161 [2024-10-07 09:48:15.737302] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737308] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.161 [2024-10-07 09:48:15.737326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.737345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:21.161 [2024-10-07 09:48:15.737358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.161 [2024-10-07 09:48:15.737365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.161 [2024-10-07 09:48:15.737375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.161 [2024-10-07 09:48:15.737397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.161 [2024-10-07 09:48:15.737485] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.161 [2024-10-07 09:48:15.737497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.161 [2024-10-07 09:48:15.737503] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737508] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=4096, cccid=4 00:27:21.162 [2024-10-07 09:48:15.737515] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcaa80) on tqpair(0xd6a760): expected_datao=0, payload_size=4096 00:27:21.162 [2024-10-07 09:48:15.737522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737537] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737546] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737556] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.737565] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.737571] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.737588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737655] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:21.162 [2024-10-07 09:48:15.737662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:21.162 [2024-10-07 09:48:15.737670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:21.162 [2024-10-07 09:48:15.737688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737696] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.737706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.737716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737723] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.737729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.737737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.162 [2024-10-07 09:48:15.737757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.162 [2024-10-07 09:48:15.737768] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcac00, cid 5, qid 0 00:27:21.162 [2024-10-07 09:48:15.741904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.741921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.741927] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.741934] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.741944] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.741953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.741959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.741965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcac00) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.741982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.741991] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcac00, cid 5, qid 0 00:27:21.162 [2024-10-07 09:48:15.742164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.742192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.742198] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742205] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcac00) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.742221] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcac00, cid 5, qid 0 00:27:21.162 [2024-10-07 09:48:15.742377] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.742390] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.742397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcac00) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.742418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742426] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcac00, cid 5, qid 0 00:27:21.162 [2024-10-07 09:48:15.742542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.162 [2024-10-07 09:48:15.742553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.162 [2024-10-07 09:48:15.742559] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcac00) on tqpair=0xd6a760 00:27:21.162 [2024-10-07 09:48:15.742588] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742620] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742627] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742647] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742678] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.162 [2024-10-07 09:48:15.742687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd6a760) 00:27:21.162 [2024-10-07 09:48:15.742696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.162 [2024-10-07 09:48:15.742716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcac00, cid 5, qid 0 00:27:21.162 [2024-10-07 09:48:15.742727] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaa80, cid 4, qid 0 00:27:21.162 [2024-10-07 09:48:15.742734] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcad80, cid 6, qid 0 00:27:21.162 [2024-10-07 
09:48:15.742741] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaf00, cid 7, qid 0 00:27:21.162 [2024-10-07 09:48:15.742979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.163 [2024-10-07 09:48:15.742995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.163 [2024-10-07 09:48:15.743001] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=8192, cccid=5 00:27:21.163 [2024-10-07 09:48:15.743014] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcac00) on tqpair(0xd6a760): expected_datao=0, payload_size=8192 00:27:21.163 [2024-10-07 09:48:15.743025] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743048] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743058] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.163 [2024-10-07 09:48:15.743074] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.163 [2024-10-07 09:48:15.743080] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743086] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=512, cccid=4 00:27:21.163 [2024-10-07 09:48:15.743093] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcaa80) on tqpair(0xd6a760): expected_datao=0, payload_size=512 00:27:21.163 [2024-10-07 09:48:15.743100] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743109] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743116] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.163 [2024-10-07 09:48:15.743132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.163 [2024-10-07 09:48:15.743138] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=512, cccid=6 00:27:21.163 [2024-10-07 09:48:15.743151] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcad80) on tqpair(0xd6a760): expected_datao=0, payload_size=512 00:27:21.163 [2024-10-07 09:48:15.743157] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743166] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743187] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:21.163 [2024-10-07 09:48:15.743203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:21.163 [2024-10-07 09:48:15.743209] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743215] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd6a760): datao=0, datal=4096, cccid=7 00:27:21.163 [2024-10-07 09:48:15.743222] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcaf00) on tqpair(0xd6a760): expected_datao=0, payload_size=4096 00:27:21.163 [2024-10-07 09:48:15.743228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743237] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743243] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.163 [2024-10-07 09:48:15.743263] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.163 [2024-10-07 09:48:15.743269] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcac00) on tqpair=0xd6a760 00:27:21.163 [2024-10-07 09:48:15.743293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.163 [2024-10-07 09:48:15.743303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.163 [2024-10-07 09:48:15.743310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaa80) on tqpair=0xd6a760 00:27:21.163 [2024-10-07 09:48:15.743330] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.163 [2024-10-07 09:48:15.743339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.163 [2024-10-07 09:48:15.743345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcad80) on tqpair=0xd6a760 00:27:21.163 [2024-10-07 09:48:15.743365] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.163 [2024-10-07 09:48:15.743374] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.163 [2024-10-07 09:48:15.743380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.163 [2024-10-07 09:48:15.743386] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaf00) on tqpair=0xd6a760 00:27:21.163 ===================================================== 00:27:21.163 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.163 ===================================================== 00:27:21.163 Controller Capabilities/Features 00:27:21.163 ================================ 00:27:21.163 Vendor ID: 8086 00:27:21.163 Subsystem Vendor ID: 8086 00:27:21.163 Serial Number: SPDK00000000000001 00:27:21.163 Model Number: SPDK bdev Controller 00:27:21.163 Firmware Version: 25.01 00:27:21.163 Recommended Arb Burst: 6 00:27:21.163 IEEE OUI Identifier: e4 d2 5c 00:27:21.163 Multi-path I/O 00:27:21.163 May have multiple subsystem ports: Yes 00:27:21.163 May have multiple controllers: Yes 00:27:21.163 Associated with SR-IOV VF: No 00:27:21.163 Max Data Transfer Size: 131072 00:27:21.163 Max Number of Namespaces: 32 00:27:21.163 Max Number of I/O Queues: 127 00:27:21.163 NVMe Specification Version (VS): 1.3 00:27:21.163 NVMe Specification Version (Identify): 1.3 00:27:21.163 Maximum Queue Entries: 128 00:27:21.163 Contiguous Queues Required: Yes 00:27:21.163 Arbitration Mechanisms Supported 00:27:21.163 Weighted Round Robin: Not Supported 00:27:21.163 Vendor Specific: Not Supported 00:27:21.163 Reset Timeout: 15000 ms 00:27:21.163 
Doorbell Stride: 4 bytes 00:27:21.163 NVM Subsystem Reset: Not Supported 00:27:21.163 Command Sets Supported 00:27:21.163 NVM Command Set: Supported 00:27:21.163 Boot Partition: Not Supported 00:27:21.163 Memory Page Size Minimum: 4096 bytes 00:27:21.163 Memory Page Size Maximum: 4096 bytes 00:27:21.163 Persistent Memory Region: Not Supported 00:27:21.163 Optional Asynchronous Events Supported 00:27:21.163 Namespace Attribute Notices: Supported 00:27:21.163 Firmware Activation Notices: Not Supported 00:27:21.163 ANA Change Notices: Not Supported 00:27:21.163 PLE Aggregate Log Change Notices: Not Supported 00:27:21.163 LBA Status Info Alert Notices: Not Supported 00:27:21.163 EGE Aggregate Log Change Notices: Not Supported 00:27:21.163 Normal NVM Subsystem Shutdown event: Not Supported 00:27:21.163 Zone Descriptor Change Notices: Not Supported 00:27:21.163 Discovery Log Change Notices: Not Supported 00:27:21.163 Controller Attributes 00:27:21.163 128-bit Host Identifier: Supported 00:27:21.163 Non-Operational Permissive Mode: Not Supported 00:27:21.163 NVM Sets: Not Supported 00:27:21.163 Read Recovery Levels: Not Supported 00:27:21.163 Endurance Groups: Not Supported 00:27:21.163 Predictable Latency Mode: Not Supported 00:27:21.163 Traffic Based Keep ALive: Not Supported 00:27:21.163 Namespace Granularity: Not Supported 00:27:21.163 SQ Associations: Not Supported 00:27:21.163 UUID List: Not Supported 00:27:21.163 Multi-Domain Subsystem: Not Supported 00:27:21.163 Fixed Capacity Management: Not Supported 00:27:21.163 Variable Capacity Management: Not Supported 00:27:21.163 Delete Endurance Group: Not Supported 00:27:21.163 Delete NVM Set: Not Supported 00:27:21.163 Extended LBA Formats Supported: Not Supported 00:27:21.163 Flexible Data Placement Supported: Not Supported 00:27:21.163 00:27:21.163 Controller Memory Buffer Support 00:27:21.163 ================================ 00:27:21.163 Supported: No 00:27:21.163 00:27:21.164 Persistent Memory Region Support 00:27:21.164 ================================ 00:27:21.164 Supported: No 00:27:21.164 00:27:21.164 Admin Command Set Attributes 00:27:21.164 ============================ 00:27:21.164 Security Send/Receive: Not Supported 00:27:21.164 Format NVM: Not Supported 00:27:21.164 Firmware Activate/Download: Not Supported 00:27:21.164 Namespace Management: Not Supported 00:27:21.164 Device Self-Test: Not Supported 00:27:21.164 Directives: Not Supported 00:27:21.164 NVMe-MI: Not Supported 00:27:21.164 Virtualization Management: Not Supported 00:27:21.164 Doorbell Buffer Config: Not Supported 00:27:21.164 Get LBA Status Capability: Not Supported 00:27:21.164 Command & Feature Lockdown Capability: Not Supported 00:27:21.164 Abort Command Limit: 4 00:27:21.164 Async Event Request Limit: 4 00:27:21.164 Number of Firmware Slots: N/A 00:27:21.164 Firmware Slot 1 Read-Only: N/A 00:27:21.164 Firmware Activation Without Reset: N/A 00:27:21.164 Multiple Update Detection Support: N/A 00:27:21.164 Firmware Update Granularity: No Information Provided 00:27:21.164 Per-Namespace SMART Log: No 00:27:21.164 Asymmetric Namespace Access Log Page: Not Supported 00:27:21.164 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:21.164 Command Effects Log Page: Supported 00:27:21.164 Get Log Page Extended Data: Supported 00:27:21.164 Telemetry Log Pages: Not Supported 00:27:21.164 Persistent Event Log Pages: Not Supported 00:27:21.164 Supported Log Pages Log Page: May Support 00:27:21.164 Commands Supported & Effects Log Page: Not Supported 00:27:21.164 Feature Identifiers & 
Effects Log Page:May Support 00:27:21.164 NVMe-MI Commands & Effects Log Page: May Support 00:27:21.164 Data Area 4 for Telemetry Log: Not Supported 00:27:21.164 Error Log Page Entries Supported: 128 00:27:21.164 Keep Alive: Supported 00:27:21.164 Keep Alive Granularity: 10000 ms 00:27:21.164 00:27:21.164 NVM Command Set Attributes 00:27:21.164 ========================== 00:27:21.164 Submission Queue Entry Size 00:27:21.164 Max: 64 00:27:21.164 Min: 64 00:27:21.164 Completion Queue Entry Size 00:27:21.164 Max: 16 00:27:21.164 Min: 16 00:27:21.164 Number of Namespaces: 32 00:27:21.164 Compare Command: Supported 00:27:21.164 Write Uncorrectable Command: Not Supported 00:27:21.164 Dataset Management Command: Supported 00:27:21.164 Write Zeroes Command: Supported 00:27:21.164 Set Features Save Field: Not Supported 00:27:21.164 Reservations: Supported 00:27:21.164 Timestamp: Not Supported 00:27:21.164 Copy: Supported 00:27:21.164 Volatile Write Cache: Present 00:27:21.164 Atomic Write Unit (Normal): 1 00:27:21.164 Atomic Write Unit (PFail): 1 00:27:21.164 Atomic Compare & Write Unit: 1 00:27:21.164 Fused Compare & Write: Supported 00:27:21.164 Scatter-Gather List 00:27:21.164 SGL Command Set: Supported 00:27:21.164 SGL Keyed: Supported 00:27:21.164 SGL Bit Bucket Descriptor: Not Supported 00:27:21.164 SGL Metadata Pointer: Not Supported 00:27:21.164 Oversized SGL: Not Supported 00:27:21.164 SGL Metadata Address: Not Supported 00:27:21.164 SGL Offset: Supported 00:27:21.164 Transport SGL Data Block: Not Supported 00:27:21.164 Replay Protected Memory Block: Not Supported 00:27:21.164 00:27:21.164 Firmware Slot Information 00:27:21.164 ========================= 00:27:21.164 Active slot: 1 00:27:21.164 Slot 1 Firmware Revision: 25.01 00:27:21.164 00:27:21.164 00:27:21.164 Commands Supported and Effects 00:27:21.164 ============================== 00:27:21.164 Admin Commands 00:27:21.164 -------------- 00:27:21.164 Get Log Page (02h): Supported 00:27:21.164 Identify (06h): Supported 00:27:21.164 Abort (08h): Supported 00:27:21.164 Set Features (09h): Supported 00:27:21.164 Get Features (0Ah): Supported 00:27:21.164 Asynchronous Event Request (0Ch): Supported 00:27:21.164 Keep Alive (18h): Supported 00:27:21.164 I/O Commands 00:27:21.164 ------------ 00:27:21.164 Flush (00h): Supported LBA-Change 00:27:21.164 Write (01h): Supported LBA-Change 00:27:21.164 Read (02h): Supported 00:27:21.164 Compare (05h): Supported 00:27:21.164 Write Zeroes (08h): Supported LBA-Change 00:27:21.164 Dataset Management (09h): Supported LBA-Change 00:27:21.164 Copy (19h): Supported LBA-Change 00:27:21.164 00:27:21.164 Error Log 00:27:21.164 ========= 00:27:21.164 00:27:21.164 Arbitration 00:27:21.164 =========== 00:27:21.164 Arbitration Burst: 1 00:27:21.164 00:27:21.164 Power Management 00:27:21.164 ================ 00:27:21.164 Number of Power States: 1 00:27:21.164 Current Power State: Power State #0 00:27:21.164 Power State #0: 00:27:21.164 Max Power: 0.00 W 00:27:21.164 Non-Operational State: Operational 00:27:21.164 Entry Latency: Not Reported 00:27:21.164 Exit Latency: Not Reported 00:27:21.164 Relative Read Throughput: 0 00:27:21.164 Relative Read Latency: 0 00:27:21.164 Relative Write Throughput: 0 00:27:21.164 Relative Write Latency: 0 00:27:21.164 Idle Power: Not Reported 00:27:21.164 Active Power: Not Reported 00:27:21.164 Non-Operational Permissive Mode: Not Supported 00:27:21.164 00:27:21.164 Health Information 00:27:21.164 ================== 00:27:21.164 Critical Warnings: 00:27:21.164 Available Spare Space: 
OK 00:27:21.164 Temperature: OK 00:27:21.164 Device Reliability: OK 00:27:21.164 Read Only: No 00:27:21.164 Volatile Memory Backup: OK 00:27:21.164 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:21.164 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:21.164 Available Spare: 0% 00:27:21.164 Available Spare Threshold: 0% 00:27:21.164 Life Percentage Used:[2024-10-07 09:48:15.743502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.164 [2024-10-07 09:48:15.743514] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd6a760) 00:27:21.164 [2024-10-07 09:48:15.743524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.164 [2024-10-07 09:48:15.743555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcaf00, cid 7, qid 0 00:27:21.164 [2024-10-07 09:48:15.743714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.164 [2024-10-07 09:48:15.743728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.164 [2024-10-07 09:48:15.743734] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.164 [2024-10-07 09:48:15.743740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcaf00) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.743783] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:21.165 [2024-10-07 09:48:15.743802] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca480) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.743812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.165 [2024-10-07 09:48:15.743820] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca600) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.743828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.165 [2024-10-07 09:48:15.743835] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca780) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.165 [2024-10-07 09:48:15.743849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.743856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.165 [2024-10-07 09:48:15.743882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.743897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.743905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.743915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.743938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.744104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.744118] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.744124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744131] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.744142] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744149] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.744165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.744195] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.744331] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.744345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.744351] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.744364] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:21.165 [2024-10-07 09:48:15.744371] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:21.165 [2024-10-07 09:48:15.744387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744395] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.744410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.744430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.744545] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.744558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.744564] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744571] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.744586] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744600] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.744610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.744630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.744718] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.744729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.744735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744741] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.744756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.744780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.744799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.744897] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.744912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.744918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.165 [2024-10-07 09:48:15.744942] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.165 [2024-10-07 09:48:15.744958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.165 [2024-10-07 09:48:15.744972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.165 [2024-10-07 09:48:15.744995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.165 [2024-10-07 09:48:15.745084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.165 [2024-10-07 09:48:15.745096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.165 [2024-10-07 09:48:15.745103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.745125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.745151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.745190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.745279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.745292] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.745299] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745305] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.745320] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745334] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.745344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.745364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.745444] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.745457] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.745464] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.745485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.745509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.745529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.745615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.745626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.745632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.745654] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.745678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.745701] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.745776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.745789] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.745795] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745801] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 
[2024-10-07 09:48:15.745816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.745831] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.745840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.745860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.749908] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.749924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.749931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.749937] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.749954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.749963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.749969] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd6a760) 00:27:21.166 [2024-10-07 09:48:15.749979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.166 [2024-10-07 09:48:15.750001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca900, cid 3, qid 0 00:27:21.166 [2024-10-07 09:48:15.750118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:21.166 [2024-10-07 09:48:15.750132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:21.166 [2024-10-07 09:48:15.750139] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:21.166 [2024-10-07 09:48:15.750145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca900) on tqpair=0xd6a760 00:27:21.166 [2024-10-07 09:48:15.750157] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:21.166 0% 00:27:21.166 Data Units Read: 0 00:27:21.166 Data Units Written: 0 00:27:21.166 Host Read Commands: 0 00:27:21.166 Host Write Commands: 0 00:27:21.166 Controller Busy Time: 0 minutes 00:27:21.166 Power Cycles: 0 00:27:21.166 Power On Hours: 0 hours 00:27:21.166 Unsafe Shutdowns: 0 00:27:21.166 Unrecoverable Media Errors: 0 00:27:21.166 Lifetime Error Log Entries: 0 00:27:21.166 Warning Temperature Time: 0 minutes 00:27:21.166 Critical Temperature Time: 0 minutes 00:27:21.166 00:27:21.166 Number of Queues 00:27:21.166 ================ 00:27:21.166 Number of I/O Submission Queues: 127 00:27:21.166 Number of I/O Completion Queues: 127 00:27:21.166 00:27:21.166 Active Namespaces 00:27:21.166 ================= 00:27:21.166 Namespace ID:1 00:27:21.166 Error Recovery Timeout: Unlimited 00:27:21.166 Command Set Identifier: NVM (00h) 00:27:21.166 Deallocate: Supported 00:27:21.166 Deallocated/Unwritten Error: Not Supported 00:27:21.166 Deallocated Read Value: Unknown 00:27:21.166 Deallocate in Write Zeroes: Not Supported 00:27:21.166 Deallocated Guard Field: 0xFFFF 00:27:21.166 Flush: Supported 00:27:21.166 Reservation: Supported 00:27:21.166 Namespace 
Sharing Capabilities: Multiple Controllers 00:27:21.166 Size (in LBAs): 131072 (0GiB) 00:27:21.166 Capacity (in LBAs): 131072 (0GiB) 00:27:21.166 Utilization (in LBAs): 131072 (0GiB) 00:27:21.166 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:21.166 EUI64: ABCDEF0123456789 00:27:21.166 UUID: fe43226f-6b5f-47c8-9ffb-41084146bf69 00:27:21.166 Thin Provisioning: Not Supported 00:27:21.166 Per-NS Atomic Units: Yes 00:27:21.166 Atomic Boundary Size (Normal): 0 00:27:21.166 Atomic Boundary Size (PFail): 0 00:27:21.166 Atomic Boundary Offset: 0 00:27:21.166 Maximum Single Source Range Length: 65535 00:27:21.166 Maximum Copy Length: 65535 00:27:21.166 Maximum Source Range Count: 1 00:27:21.166 NGUID/EUI64 Never Reused: No 00:27:21.166 Namespace Write Protected: No 00:27:21.166 Number of LBA Formats: 1 00:27:21.166 Current LBA Format: LBA Format #00 00:27:21.166 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:21.166 00:27:21.166 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:21.166 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.167 rmmod nvme_tcp 00:27:21.167 rmmod nvme_fabrics 00:27:21.167 rmmod nvme_keyring 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1609920 ']' 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1609920 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1609920 ']' 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1609920 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609920 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:21.167 09:48:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609920' 00:27:21.167 killing process with pid 1609920 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1609920 00:27:21.167 09:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1609920 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.426 09:48:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.955 00:27:23.955 real 0m6.327s 00:27:23.955 user 0m4.908s 00:27:23.955 sys 0m2.542s 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:23.955 ************************************ 00:27:23.955 END TEST nvmf_identify 00:27:23.955 ************************************ 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.955 ************************************ 00:27:23.955 START TEST nvmf_perf 00:27:23.955 ************************************ 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:23.955 * Looking for test storage... 
00:27:23.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.955 --rc genhtml_branch_coverage=1 00:27:23.955 --rc genhtml_function_coverage=1 00:27:23.955 --rc genhtml_legend=1 00:27:23.955 --rc geninfo_all_blocks=1 00:27:23.955 --rc geninfo_unexecuted_blocks=1 00:27:23.955 00:27:23.955 ' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.955 --rc genhtml_branch_coverage=1 00:27:23.955 --rc genhtml_function_coverage=1 00:27:23.955 --rc genhtml_legend=1 00:27:23.955 --rc geninfo_all_blocks=1 00:27:23.955 --rc geninfo_unexecuted_blocks=1 00:27:23.955 00:27:23.955 ' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.955 --rc genhtml_branch_coverage=1 00:27:23.955 --rc genhtml_function_coverage=1 00:27:23.955 --rc genhtml_legend=1 00:27:23.955 --rc geninfo_all_blocks=1 00:27:23.955 --rc geninfo_unexecuted_blocks=1 00:27:23.955 00:27:23.955 ' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.955 --rc genhtml_branch_coverage=1 00:27:23.955 --rc genhtml_function_coverage=1 00:27:23.955 --rc genhtml_legend=1 00:27:23.955 --rc geninfo_all_blocks=1 00:27:23.955 --rc geninfo_unexecuted_blocks=1 00:27:23.955 00:27:23.955 ' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:23.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.955 09:48:18 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.955 09:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.499 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:26.500 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:26.500 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:26.500 Found net devices under 0000:84:00.0: cvl_0_0 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:26.500 09:48:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:26.500 Found net devices under 0000:84:00.1: cvl_0_1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.500 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.758 09:48:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:27:26.758 00:27:26.758 --- 10.0.0.2 ping statistics --- 00:27:26.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.758 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:27:26.758 00:27:26.758 --- 10.0.0.1 ping statistics --- 00:27:26.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.758 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1612085 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1612085 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1612085 ']' 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:27:26.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.758 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.758 [2024-10-07 09:48:21.440900] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:27:26.758 [2024-10-07 09:48:21.441006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.758 [2024-10-07 09:48:21.519616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.016 [2024-10-07 09:48:21.643809] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.016 [2024-10-07 09:48:21.643884] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.016 [2024-10-07 09:48:21.643910] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.016 [2024-10-07 09:48:21.643924] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.016 [2024-10-07 09:48:21.643954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.016 [2024-10-07 09:48:21.645873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.016 [2024-10-07 09:48:21.645967] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.016 [2024-10-07 09:48:21.645926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.016 [2024-10-07 09:48:21.645970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:27.016 09:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:31.195 09:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:31.195 09:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:31.195 09:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:27:31.195 09:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:31.760 09:48:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
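At this point perf.sh has a Malloc bdev plus the local NVMe controller at 0000:82:00.0 (exposed as Nvme0n1 in the traces below), and the next block of traces wires both into an NVMe-oF subsystem served over TCP. Condensed, with the full rpc.py path shortened for readability, the RPC sequence it drives is:

  rpc.py nvmf_create_transport -t tcp -o                                            # enable the TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # create the test subsystem
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # namespace 1: the Malloc bdev
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                   # namespace 2: the local NVMe drive
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, 10.0.0.2:4420 is the endpoint every spdk_nvme_perf run below targets via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.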
00:27:31.760 09:48:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:27:31.760 09:48:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:31.760 09:48:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:31.760 09:48:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:32.398 [2024-10-07 09:48:26.999727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.398 09:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.656 09:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:32.656 09:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:33.222 09:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:33.222 09:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:33.789 09:48:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.353 [2024-10-07 09:48:29.019032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.353 09:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.611 09:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:27:34.611 09:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:27:34.611 09:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:34.611 09:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:27:35.980 Initializing NVMe Controllers 00:27:35.981 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:27:35.981 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:27:35.981 Initialization complete. Launching workers. 
00:27:35.981 ======================================================== 00:27:35.981 Latency(us) 00:27:35.981 Device Information : IOPS MiB/s Average min max 00:27:35.981 PCIE (0000:82:00.0) NSID 1 from core 0: 82830.82 323.56 385.73 44.56 4331.51 00:27:35.981 ======================================================== 00:27:35.981 Total : 82830.82 323.56 385.73 44.56 4331.51 00:27:35.981 00:27:35.981 09:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:37.885 Initializing NVMe Controllers 00:27:37.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:37.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:37.885 Initialization complete. Launching workers. 00:27:37.885 ======================================================== 00:27:37.885 Latency(us) 00:27:37.885 Device Information : IOPS MiB/s Average min max 00:27:37.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10517.84 144.56 44911.81 00:27:37.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18635.18 7141.98 47909.47 00:27:37.885 ======================================================== 00:27:37.885 Total : 154.00 0.60 13469.60 144.56 47909.47 00:27:37.885 00:27:37.885 09:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.819 Initializing NVMe Controllers 00:27:38.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:38.819 Initialization complete. Launching workers. 00:27:38.819 ======================================================== 00:27:38.819 Latency(us) 00:27:38.819 Device Information : IOPS MiB/s Average min max 00:27:38.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8546.69 33.39 3743.32 602.40 10112.57 00:27:38.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3761.82 14.69 8518.71 6710.49 18032.19 00:27:38.819 ======================================================== 00:27:38.819 Total : 12308.52 48.08 5202.81 602.40 18032.19 00:27:38.819 00:27:39.077 09:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:39.077 09:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:39.077 09:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:41.607 Initializing NVMe Controllers 00:27:41.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.607 Controller IO queue size 128, less than required. 00:27:41.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:41.607 Controller IO queue size 128, less than required. 00:27:41.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:41.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:41.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:41.607 Initialization complete. Launching workers. 00:27:41.607 ======================================================== 00:27:41.607 Latency(us) 00:27:41.607 Device Information : IOPS MiB/s Average min max 00:27:41.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1372.97 343.24 96396.20 56597.87 151369.11 00:27:41.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 565.37 141.34 230331.36 126248.16 349209.35 00:27:41.607 ======================================================== 00:27:41.607 Total : 1938.34 484.59 135462.06 56597.87 349209.35 00:27:41.607 00:27:41.607 09:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:41.607 No valid NVMe controllers or AIO or URING devices found 00:27:41.607 Initializing NVMe Controllers 00:27:41.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.607 Controller IO queue size 128, less than required. 00:27:41.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:41.607 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:41.607 Controller IO queue size 128, less than required. 00:27:41.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:41.607 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:41.607 WARNING: Some requested NVMe devices were skipped 00:27:41.607 09:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:44.884 Initializing NVMe Controllers 00:27:44.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.884 Controller IO queue size 128, less than required. 00:27:44.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:44.884 Controller IO queue size 128, less than required. 00:27:44.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:44.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:44.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:44.884 Initialization complete. Launching workers. 
00:27:44.884 00:27:44.884 ==================== 00:27:44.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:44.884 TCP transport: 00:27:44.884 polls: 7933 00:27:44.884 idle_polls: 5518 00:27:44.884 sock_completions: 2415 00:27:44.884 nvme_completions: 4705 00:27:44.884 submitted_requests: 7048 00:27:44.884 queued_requests: 1 00:27:44.884 00:27:44.884 ==================== 00:27:44.884 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:44.884 TCP transport: 00:27:44.884 polls: 5474 00:27:44.884 idle_polls: 2873 00:27:44.884 sock_completions: 2601 00:27:44.884 nvme_completions: 5105 00:27:44.884 submitted_requests: 7688 00:27:44.884 queued_requests: 1 00:27:44.884 ======================================================== 00:27:44.884 Latency(us) 00:27:44.884 Device Information : IOPS MiB/s Average min max 00:27:44.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1173.49 293.37 112375.22 65276.23 186171.22 00:27:44.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1273.28 318.32 101559.64 60891.12 162148.37 00:27:44.884 ======================================================== 00:27:44.884 Total : 2446.76 611.69 106746.89 60891.12 186171.22 00:27:44.884 00:27:44.884 09:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:44.884 09:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.884 rmmod nvme_tcp 00:27:44.884 rmmod nvme_fabrics 00:27:44.884 rmmod nvme_keyring 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1612085 ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1612085 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1612085 ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1612085 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612085 00:27:44.884 09:48:39 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612085' 00:27:44.884 killing process with pid 1612085 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1612085 00:27:44.884 09:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1612085 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.782 09:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.685 00:27:48.685 real 0m24.800s 00:27:48.685 user 1m18.761s 00:27:48.685 sys 0m6.700s 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:48.685 ************************************ 00:27:48.685 END TEST nvmf_perf 00:27:48.685 ************************************ 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.685 ************************************ 00:27:48.685 START TEST nvmf_fio_host 00:27:48.685 ************************************ 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:48.685 * Looking for test storage... 
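For reference before the fio host test repeats the same target bring-up: the latency tables above come from direct spdk_nvme_perf invocations against that listener, each run varying only queue depth (-q), IO size (-o), and runtime (-t) while keeping a 50/50 random read/write mix (-w randrw -M 50). A sketch of the first fabric run from this log, with the build path shortened to the bare binary name:

  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # queue depth 1, 4 KiB IOs, 1 second

The one run that produced "No valid NVMe controllers or AIO or URING devices found" was not a connectivity failure: its -o 36964 IO size is not a multiple of the 512-byte sector size of either namespace, so both namespaces were removed from the test, exactly as the preceding WARNING lines report.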
00:27:48.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:48.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.685 --rc genhtml_branch_coverage=1 00:27:48.685 --rc genhtml_function_coverage=1 00:27:48.685 --rc genhtml_legend=1 00:27:48.685 --rc geninfo_all_blocks=1 00:27:48.685 --rc geninfo_unexecuted_blocks=1 00:27:48.685 00:27:48.685 ' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:48.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.685 --rc genhtml_branch_coverage=1 00:27:48.685 --rc genhtml_function_coverage=1 00:27:48.685 --rc genhtml_legend=1 00:27:48.685 --rc geninfo_all_blocks=1 00:27:48.685 --rc geninfo_unexecuted_blocks=1 00:27:48.685 00:27:48.685 ' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:48.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.685 --rc genhtml_branch_coverage=1 00:27:48.685 --rc genhtml_function_coverage=1 00:27:48.685 --rc genhtml_legend=1 00:27:48.685 --rc geninfo_all_blocks=1 00:27:48.685 --rc geninfo_unexecuted_blocks=1 00:27:48.685 00:27:48.685 ' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:48.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.685 --rc genhtml_branch_coverage=1 00:27:48.685 --rc genhtml_function_coverage=1 00:27:48.685 --rc genhtml_legend=1 00:27:48.685 --rc geninfo_all_blocks=1 00:27:48.685 --rc geninfo_unexecuted_blocks=1 00:27:48.685 00:27:48.685 ' 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.685 09:48:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.685 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:48.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:48.686 
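The "[: : integer expression expected" message above is not a test failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', a numeric comparison whose left operand expanded to an empty string, so bash's [ builtin prints the warning, the test evaluates false, and the script carries on. A minimal sketch of the pattern and a defensive variant (the variable name below is illustrative, not the one common.sh actually expands):

    flag=""                          # illustrative; common.sh expands one of its own variables here
    if [ "$flag" -eq 1 ]; then       # prints "[: : integer expression expected" and evaluates false
        echo "enabled"
    fi

    if [ "${flag:-0}" -eq 1 ]; then  # defaulting to 0 keeps the comparison numeric and silent
        echo "enabled"
    fi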
09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.686 09:48:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:51.218 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.218 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:51.219 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:51.219 Found net devices under 0000:84:00.0: cvl_0_0 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:51.219 Found net devices under 0000:84:00.1: cvl_0_1 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.219 09:48:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.219 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.219 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.219 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:27:51.478 00:27:51.478 --- 10.0.0.2 ping statistics --- 00:27:51.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.478 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:51.478 00:27:51.478 --- 10.0.0.1 ping statistics --- 00:27:51.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.478 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1616405 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1616405 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1616405 ']' 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.478 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.479 [2024-10-07 09:48:46.187568] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
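The nvmf_tcp_init trace above builds the physical-NIC test topology: the E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, both directions are verified with ping, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace. A condensed sketch of those steps, assuming the interface names and addresses seen in this log:

    # move one port into a private namespace and address both sides
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port on the initiator interface and check reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # run the target inside the namespace; the initiator stays in the root namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &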
00:27:51.479 [2024-10-07 09:48:46.187708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.479 [2024-10-07 09:48:46.283038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.737 [2024-10-07 09:48:46.406651] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.737 [2024-10-07 09:48:46.406721] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.737 [2024-10-07 09:48:46.406738] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.737 [2024-10-07 09:48:46.406752] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.737 [2024-10-07 09:48:46.406763] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.737 [2024-10-07 09:48:46.408735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.737 [2024-10-07 09:48:46.408822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.737 [2024-10-07 09:48:46.408883] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.737 [2024-10-07 09:48:46.408885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.737 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.737 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:27:51.737 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:52.302 [2024-10-07 09:48:46.901685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.302 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:52.302 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.302 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.302 09:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:52.868 Malloc1 00:27:52.868 09:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.126 09:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:53.691 09:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.624 [2024-10-07 09:48:49.098103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.624 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:54.882 09:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.140 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:55.140 fio-3.35 00:27:55.140 Starting 1 thread 00:27:57.663 00:27:57.663 test: (groupid=0, jobs=1): 
err= 0: pid=1616897: Mon Oct 7 09:48:52 2024 00:27:57.663 read: IOPS=9031, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:27:57.663 slat (usec): min=2, max=185, avg= 3.18, stdev= 1.78 00:27:57.663 clat (usec): min=2508, max=13245, avg=7773.67, stdev=600.16 00:27:57.663 lat (usec): min=2537, max=13248, avg=7776.84, stdev=600.03 00:27:57.663 clat percentiles (usec): 00:27:57.663 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:27:57.663 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:27:57.663 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:27:57.663 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[12125], 99.95th=[12649], 00:27:57.663 | 99.99th=[13173] 00:27:57.663 bw ( KiB/s): min=35416, max=36688, per=99.90%, avg=36088.00, stdev=520.96, samples=4 00:27:57.663 iops : min= 8854, max= 9172, avg=9022.00, stdev=130.24, samples=4 00:27:57.663 write: IOPS=9048, BW=35.3MiB/s (37.1MB/s)(70.9MiB/2006msec); 0 zone resets 00:27:57.663 slat (usec): min=2, max=126, avg= 3.32, stdev= 1.09 00:27:57.663 clat (usec): min=1418, max=12361, avg=6356.67, stdev=508.67 00:27:57.663 lat (usec): min=1427, max=12364, avg=6359.99, stdev=508.60 00:27:57.663 clat percentiles (usec): 00:27:57.663 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:27:57.663 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:27:57.663 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:27:57.663 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[10421], 99.95th=[11207], 00:27:57.663 | 99.99th=[12256] 00:27:57.663 bw ( KiB/s): min=35960, max=36416, per=100.00%, avg=36196.00, stdev=192.72, samples=4 00:27:57.663 iops : min= 8990, max= 9104, avg=9049.00, stdev=48.18, samples=4 00:27:57.663 lat (msec) : 2=0.03%, 4=0.11%, 10=99.70%, 20=0.16% 00:27:57.663 cpu : usr=70.52%, sys=28.13%, ctx=49, majf=0, minf=32 00:27:57.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:57.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:57.663 issued rwts: total=18117,18152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:57.663 00:27:57.663 Run status group 0 (all jobs): 00:27:57.663 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2006-2006msec 00:27:57.663 WRITE: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.3MB), run=2006-2006msec 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local 
sanitizers 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:57.663 09:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:57.920 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:57.920 fio-3.35 00:27:57.920 Starting 1 thread 00:28:00.449 00:28:00.449 test: (groupid=0, jobs=1): err= 0: pid=1617229: Mon Oct 7 09:48:54 2024 00:28:00.449 read: IOPS=5775, BW=90.2MiB/s (94.6MB/s)(182MiB/2013msec) 00:28:00.449 slat (usec): min=3, max=283, avg= 7.06, stdev= 4.74 00:28:00.449 clat (usec): min=3017, max=28668, avg=13387.25, stdev=5108.42 00:28:00.449 lat (usec): min=3027, max=28678, avg=13394.31, stdev=5109.99 00:28:00.449 clat percentiles (usec): 00:28:00.449 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 9110], 00:28:00.449 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11731], 60.00th=[13173], 00:28:00.449 | 70.00th=[15533], 80.00th=[18220], 90.00th=[21365], 95.00th=[23987], 00:28:00.449 | 99.00th=[25560], 99.50th=[26084], 99.90th=[27657], 99.95th=[27919], 00:28:00.449 | 99.99th=[28443] 00:28:00.449 bw ( KiB/s): min=36128, max=67904, per=50.68%, avg=46840.00, stdev=14860.49, samples=4 00:28:00.449 iops : min= 2258, max= 4244, avg=2927.50, stdev=928.78, samples=4 00:28:00.449 write: IOPS=3443, BW=53.8MiB/s (56.4MB/s)(95.0MiB/1766msec); 0 zone resets 00:28:00.449 slat 
(usec): min=39, max=347, avg=60.04, stdev=20.98 00:28:00.449 clat (usec): min=4214, max=30349, avg=15491.92, stdev=4212.54 00:28:00.449 lat (usec): min=4262, max=30440, avg=15551.96, stdev=4224.78 00:28:00.449 clat percentiles (usec): 00:28:00.449 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11207], 20.00th=[12125], 00:28:00.449 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14091], 60.00th=[15008], 00:28:00.449 | 70.00th=[17171], 80.00th=[19530], 90.00th=[22152], 95.00th=[23987], 00:28:00.449 | 99.00th=[26346], 99.50th=[27657], 99.90th=[28443], 99.95th=[28967], 00:28:00.449 | 99.99th=[30278] 00:28:00.449 bw ( KiB/s): min=37088, max=71552, per=88.30%, avg=48648.00, stdev=16128.37, samples=4 00:28:00.449 iops : min= 2318, max= 4472, avg=3040.50, stdev=1008.02, samples=4 00:28:00.449 lat (msec) : 4=0.16%, 10=20.60%, 20=63.76%, 50=15.48% 00:28:00.449 cpu : usr=84.49%, sys=14.31%, ctx=7, majf=0, minf=57 00:28:00.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:00.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:00.449 issued rwts: total=11627,6081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:00.449 00:28:00.449 Run status group 0 (all jobs): 00:28:00.449 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=182MiB (190MB), run=2013-2013msec 00:28:00.449 WRITE: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=95.0MiB (99.6MB), run=1766-1766msec 00:28:00.449 09:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.707 rmmod nvme_tcp 00:28:00.707 rmmod nvme_fabrics 00:28:00.707 rmmod nvme_keyring 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1616405 ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1616405 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1616405 ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 1616405 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616405 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616405' 00:28:00.707 killing process with pid 1616405 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1616405 00:28:00.707 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1616405 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.966 09:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.498 00:28:03.498 real 0m14.582s 00:28:03.498 user 0m44.659s 00:28:03.498 sys 0m4.498s 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.498 ************************************ 00:28:03.498 END TEST nvmf_fio_host 00:28:03.498 ************************************ 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.498 ************************************ 00:28:03.498 START TEST nvmf_failover 00:28:03.498 ************************************ 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.498 * Looking for test storage... 00:28:03.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:28:03.498 09:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.498 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:03.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.499 --rc genhtml_branch_coverage=1 00:28:03.499 --rc genhtml_function_coverage=1 00:28:03.499 --rc genhtml_legend=1 00:28:03.499 --rc geninfo_all_blocks=1 00:28:03.499 --rc geninfo_unexecuted_blocks=1 00:28:03.499 00:28:03.499 ' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:03.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.499 --rc genhtml_branch_coverage=1 00:28:03.499 --rc genhtml_function_coverage=1 00:28:03.499 --rc genhtml_legend=1 00:28:03.499 --rc geninfo_all_blocks=1 00:28:03.499 --rc geninfo_unexecuted_blocks=1 00:28:03.499 00:28:03.499 ' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:03.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.499 --rc genhtml_branch_coverage=1 00:28:03.499 --rc genhtml_function_coverage=1 00:28:03.499 --rc genhtml_legend=1 00:28:03.499 --rc geninfo_all_blocks=1 00:28:03.499 --rc geninfo_unexecuted_blocks=1 00:28:03.499 00:28:03.499 ' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:03.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.499 --rc genhtml_branch_coverage=1 00:28:03.499 --rc genhtml_function_coverage=1 00:28:03.499 --rc genhtml_legend=1 00:28:03.499 --rc geninfo_all_blocks=1 00:28:03.499 --rc geninfo_unexecuted_blocks=1 00:28:03.499 00:28:03.499 ' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
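failover.sh points rpc_py at the same scripts/rpc.py client used by the fio host test above and keeps the 64 MiB / 512 B malloc geometry. For reference, this is the target configuration the fio host run performed through that RPC client, as a condensed sketch of the commands visible in the trace above (not the failover sequence itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as used by fio.sh
    $rpc bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420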
00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.499 09:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:06.106 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:06.106 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:06.106 Found net devices under 0000:84:00.0: cvl_0_0 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:06.106 Found net devices under 0000:84:00.1: cvl_0_1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
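With the two E810 ports mapped to cvl_0_0 (target side, 10.0.0.2) and cvl_0_1 (initiator side, 10.0.0.1), nvmf_tcp_init builds the test topology traced next. Condensed into plain commands, and assuming the same device, namespace and address names as this run, the setup amounts to:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Keeping only the target NIC inside cvl_0_0_ns_spdk gives target and initiator separate network stacks on the same host, so the test traffic has to cross the link between the two ports rather than being short-circuited over loopback.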
00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.106 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:28:06.107 00:28:06.107 --- 10.0.0.2 ping statistics --- 00:28:06.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.107 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:06.107 00:28:06.107 --- 10.0.0.1 ping statistics --- 00:28:06.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.107 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1619570 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1619570 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1619570 ']' 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.107 09:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:06.107 [2024-10-07 09:49:00.776655] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:28:06.107 [2024-10-07 09:49:00.776771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.107 [2024-10-07 09:49:00.887350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:06.365 [2024-10-07 09:49:01.055012] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
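A minimal recap of the target bring-up performed in the trace that follows, assuming the same namespace name and a shell running from the SPDK tree (the full /var/jenkins/... paths are shortened to relative ones; the readiness loop is a rough stand-in for what waitforlisten does):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # poll the RPC socket until the target answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

Three listeners on the same address give failover.sh three paths to the one subsystem, which is what the listener add/remove churn later in the test exercises.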
00:28:06.365 [2024-10-07 09:49:01.055070] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.365 [2024-10-07 09:49:01.055085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.365 [2024-10-07 09:49:01.055097] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.365 [2024-10-07 09:49:01.055108] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.365 [2024-10-07 09:49:01.056205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.365 [2024-10-07 09:49:01.056294] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.365 [2024-10-07 09:49:01.056298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.623 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:06.881 [2024-10-07 09:49:01.547798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.881 09:49:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:07.447 Malloc0 00:28:07.447 09:49:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.014 09:49:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.580 09:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.146 [2024-10-07 09:49:03.865584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.146 09:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:09.712 [2024-10-07 09:49:04.242703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.712 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:09.970 [2024-10-07 09:49:04.619897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1620111 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1620111 /var/tmp/bdevperf.sock 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1620111 ']' 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.970 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:10.229 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.229 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:10.229 09:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.795 NVMe0n1 00:28:10.795 09:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:11.729 00:28:11.729 09:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1620258 00:28:11.729 09:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:11.729 09:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:28:12.664 09:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.922 09:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:28:16.207 09:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:16.774 00:28:16.774 09:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:17.342 [2024-10-07 09:49:11.906388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 [2024-10-07 09:49:11.906539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc7b0 is same with the state(6) to be set 00:28:17.342 09:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:28:20.626 09:49:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.883 [2024-10-07 09:49:15.490595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.883 09:49:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:28:21.819 09:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:22.386 [2024-10-07 09:49:16.998783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c1ba0 is same with the state(6) to be set 00:28:22.386 09:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1620258 00:28:27.661 { 00:28:27.661 "results": [ 00:28:27.661 { 00:28:27.661 "job": "NVMe0n1", 00:28:27.661 "core_mask": "0x1", 00:28:27.661 "workload": "verify", 00:28:27.661 "status": "finished", 00:28:27.661 "verify_range": { 00:28:27.661 "start": 0, 00:28:27.661 "length": 16384 00:28:27.661 }, 00:28:27.661 "queue_depth": 128, 00:28:27.661 "io_size": 4096, 00:28:27.661 "runtime": 15.006492, 00:28:27.661 "iops": 8562.294239053337, 00:28:27.661 "mibps": 33.4464618713021, 00:28:27.661 "io_failed": 6901, 00:28:27.661 "io_timeout": 0, 00:28:27.661 "avg_latency_us": 14161.417711391177, 00:28:27.661 "min_latency_us": 543.0992592592593, 00:28:27.661 "max_latency_us": 19029.712592592594 00:28:27.661 } 00:28:27.661 ], 00:28:27.661 "core_count": 1 00:28:27.661 } 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # 
'[' -z 1620111 ']' 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1620111' 00:28:27.661 killing process with pid 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1620111 00:28:27.661 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:27.661 [2024-10-07 09:49:04.694546] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:28:27.661 [2024-10-07 09:49:04.694652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620111 ] 00:28:27.661 [2024-10-07 09:49:04.760044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.661 [2024-10-07 09:49:04.871425] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.661 Running I/O for 15 seconds... 
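For reference, the failover choreography that produced the numbers above (and the try.txt dump that follows) is driven with rpc.py against the target and the bdevperf RPC socket while bdevperf runs 128-deep 4 KiB verify I/O against NVMe0 for 15 seconds. Paths are shortened; otherwise this mirrors the calls traced earlier in this run:

  # bdevperf already holds two paths: 10.0.0.2:4420 and 10.0.0.2:4421
  sleep 1
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each removal tears down the active TCP queue pairs, and the ~6901 failed I/Os reported for the 15 s run are consistent with the ABORTED - SQ DELETION completions that fill the rest of try.txt below: each one a command that was outstanding on a queue pair whose listener had just been pulled.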
00:28:27.661 8636.00 IOPS, 33.73 MiB/s [2024-10-07 09:49:07.573818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.573908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.573938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.573955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.573972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.573987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 
[2024-10-07 09:49:07.574234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.661 [2024-10-07 09:49:07.574404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.661 [2024-10-07 09:49:07.574683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.661 [2024-10-07 09:49:07.574696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574831] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.662 [2024-10-07 09:49:07.574888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.574973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.574986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 
09:49:07.575747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.662 [2024-10-07 09:49:07.575851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.662 [2024-10-07 09:49:07.575866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.575898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.575915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.575930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.575944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.575959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.575973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.575988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 
[2024-10-07 09:49:07.576969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.576983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.576998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.577012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.577027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.663 [2024-10-07 09:49:07.577041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.663 [2024-10-07 09:49:07.577055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:07.577719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.664 [2024-10-07 09:49:07.577767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.664 [2024-10-07 09:49:07.577780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76184 len:8 PRP1 0x0 PRP2 0x0 00:28:27.664 [2024-10-07 09:49:07.577793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:07.577851] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x237b730 was disconnected and freed. reset controller. 
00:28:27.664 [2024-10-07 09:49:07.577870] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:28:27.664 [2024-10-07 09:49:07.577911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:27.664 [2024-10-07 09:49:07.577931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.664 [2024-10-07 09:49:07.577947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:27.664 [2024-10-07 09:49:07.577961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.664 [2024-10-07 09:49:07.577975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:27.664 [2024-10-07 09:49:07.577988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.664 [2024-10-07 09:49:07.578002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:27.664 [2024-10-07 09:49:07.578014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.664 [2024-10-07 09:49:07.578027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:27.664 [2024-10-07 09:49:07.581348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:28:27.664 [2024-10-07 09:49:07.581387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2358e80 (9): Bad file descriptor 
00:28:27.664 [2024-10-07 09:49:07.611619] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:27.664 8517.00 IOPS, 33.27 MiB/s 8562.33 IOPS, 33.45 MiB/s 8558.00 IOPS, 33.43 MiB/s 8550.20 IOPS, 33.40 MiB/s [2024-10-07 09:49:11.908026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:11.908070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:11.908097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:11.908127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:11.908145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:11.908161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:11.908177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:11.908191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.664 [2024-10-07 09:49:11.908206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.664 [2024-10-07 09:49:11.908221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.665 [2024-10-07 09:49:11.908489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.908963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.908977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.665 [2024-10-07 09:49:11.908991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.665 [2024-10-07 09:49:11.909190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.665 [2024-10-07 09:49:11.909206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909309] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.909986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.909999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.666 [2024-10-07 09:49:11.910386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.666 [2024-10-07 09:49:11.910404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 
09:49:11.910784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.667 [2024-10-07 09:49:11.910866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.910946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118200 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.910961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.910979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.910991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118208 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118216 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118224 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 
[2024-10-07 09:49:11.911135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118232 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118240 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118248 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118256 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118264 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118272 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911433] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118280 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118288 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118296 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.667 [2024-10-07 09:49:11.911580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.667 [2024-10-07 09:49:11.911592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118304 len:8 PRP1 0x0 PRP2 0x0 00:28:27.667 [2024-10-07 09:49:11.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.667 [2024-10-07 09:49:11.911617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118312 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118320 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118328 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118336 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118344 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118352 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118360 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.911964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.911977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.911988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.911999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118368 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 
[2024-10-07 09:49:11.912046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118376 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118384 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118392 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118400 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118408 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117504 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117512 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117520 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117528 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117536 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117544 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.668 [2024-10-07 09:49:11.912572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.668 [2024-10-07 09:49:11.912583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117552 len:8 PRP1 0x0 PRP2 0x0 00:28:27.668 [2024-10-07 09:49:11.912596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912653] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23849d0 was disconnected and freed. reset controller. 
00:28:27.668 [2024-10-07 09:49:11.912670] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:27.668 [2024-10-07 09:49:11.912705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.668 [2024-10-07 09:49:11.912723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.668 [2024-10-07 09:49:11.912753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.668 [2024-10-07 09:49:11.912784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.668 [2024-10-07 09:49:11.912812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.668 [2024-10-07 09:49:11.912825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.668 [2024-10-07 09:49:11.912875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2358e80 (9): Bad file descriptor 00:28:27.668 [2024-10-07 09:49:11.916136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.669 [2024-10-07 09:49:12.034943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
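(The burst of "ABORTED - SQ DELETION (00/08)" completions above is the expected signature of the nvmf host failover test rather than a data-path failure: nvme_qpair_abort_queued_reqs() manually completes every I/O still queued on the TCP qpair being torn down, bdev_nvme then starts the failover from 10.0.0.2:4421 to 10.0.0.2:4422 for nqn.2016-06.io.spdk:cnode1, and "Resetting controller successful" is logged once the reconnect finishes. The same pattern repeats below for the failover from 4422 back to 4420.)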
00:28:27.669 8381.50 IOPS, 32.74 MiB/s 8437.29 IOPS, 32.96 MiB/s 8469.12 IOPS, 33.08 MiB/s 8498.67 IOPS, 33.20 MiB/s 8514.00 IOPS, 33.26 MiB/s [2024-10-07 09:49:17.000295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.000341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.000383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.000973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.000987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.669 [2024-10-07 09:49:17.001292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.001321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.001349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.001377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.669 [2024-10-07 09:49:17.001413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.669 [2024-10-07 09:49:17.001428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.670 [2024-10-07 09:49:17.001441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.670 [2024-10-07 09:49:17.001470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.670 [2024-10-07 09:49:17.001498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.670 [2024-10-07 09:49:17.001525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.670 [2024-10-07 09:49:17.001553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:27.670 [2024-10-07 09:49:17.001568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001849] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.001989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.670 [2024-10-07 09:49:17.002434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.670 [2024-10-07 09:49:17.002447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108664 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.002971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.671 [2024-10-07 09:49:17.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108728 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108736 len:8 PRP1 0x0 PRP2 0x0 
00:28:27.671 [2024-10-07 09:49:17.003104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108744 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108752 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108760 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108768 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108776 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108784 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108792 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108808 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108816 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.671 [2024-10-07 09:49:17.003604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.671 [2024-10-07 09:49:17.003615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.671 [2024-10-07 09:49:17.003625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:28:27.671 [2024-10-07 09:49:17.003637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108832 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003687] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108840 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108848 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108856 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108864 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108872 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.003951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.003965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.003976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.003987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108880 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108888 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108896 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108904 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108912 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108920 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107992 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 
[2024-10-07 09:49:17.004309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108000 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108008 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108016 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108024 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108032 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108040 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004597] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108048 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108056 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108064 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.672 [2024-10-07 09:49:17.004758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108072 len:8 PRP1 0x0 PRP2 0x0 00:28:27.672 [2024-10-07 09:49:17.004770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.672 [2024-10-07 09:49:17.004782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.672 [2024-10-07 09:49:17.004793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.673 [2024-10-07 09:49:17.004806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108080 len:8 PRP1 0x0 PRP2 0x0 00:28:27.673 [2024-10-07 09:49:17.004819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.004832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.673 [2024-10-07 09:49:17.004843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.673 [2024-10-07 09:49:17.004854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108088 len:8 PRP1 0x0 PRP2 0x0 00:28:27.673 [2024-10-07 09:49:17.004872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.004886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:28:27.673 [2024-10-07 09:49:17.004919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.673 [2024-10-07 09:49:17.004931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108096 len:8 PRP1 0x0 PRP2 0x0 00:28:27.673 [2024-10-07 09:49:17.004944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.004958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:27.673 [2024-10-07 09:49:17.004968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:27.673 [2024-10-07 09:49:17.004980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108104 len:8 PRP1 0x0 PRP2 0x0 00:28:27.673 [2024-10-07 09:49:17.004992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.005050] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23868b0 was disconnected and freed. reset controller. 00:28:27.673 [2024-10-07 09:49:17.005068] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:27.673 [2024-10-07 09:49:17.005103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.673 [2024-10-07 09:49:17.005122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.005138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.673 [2024-10-07 09:49:17.005151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.005165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.673 [2024-10-07 09:49:17.005177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.005191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:27.673 [2024-10-07 09:49:17.005219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.673 [2024-10-07 09:49:17.005232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.673 [2024-10-07 09:49:17.008483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.673 [2024-10-07 09:49:17.008523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2358e80 (9): Bad file descriptor 00:28:27.673 [2024-10-07 09:49:17.037956] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
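The burst of ABORTED - SQ DELETION (00/08) completions above is the expected side effect of the failover step: when the path on 10.0.0.2:4422 is torn down, every queued READ/WRITE on that qpair is completed manually with generic status 0x08 (aborted due to SQ deletion), bdev_nvme logs the failover to 10.0.0.2:4420, and the controller is reset on the surviving path. The script only verifies how many of those resets succeeded; a minimal sketch of that check, assuming the bdevperf output was captured to try.txt as it is in this run:

# count successful controller resets in the captured bdevperf output (file path taken from this run)
count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || echo "expected 3 successful resets, saw $count"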
00:28:27.673 8526.18 IOPS, 33.31 MiB/s 8538.08 IOPS, 33.35 MiB/s 8548.00 IOPS, 33.39 MiB/s 8552.43 IOPS, 33.41 MiB/s
00:28:27.673 Latency(us)
00:28:27.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.673 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:27.673 Verification LBA range: start 0x0 length 0x4000
00:28:27.673 NVMe0n1 : 15.01 8562.29 33.45 459.87 0.00 14161.42 543.10 19029.71
00:28:27.673 ===================================================================================================================
00:28:27.673 Total : 8562.29 33.45 459.87 0.00 14161.42 543.10 19029.71
00:28:27.673 Received shutdown signal, test time was about 15.000000 seconds
00:28:27.673
00:28:27.673 Latency(us)
00:28:27.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.673 ===================================================================================================================
00:28:27.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1621965
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1621965 /var/tmp/bdevperf.sock
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1621965 ']'
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
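The second half of the test drives bdevperf as an RPC server: -z starts it idle with no bdevs, -r points it at its own UNIX socket, and the NVMe bdev is created afterwards over that socket. Stripped of the workspace paths, the sequence that follows in the trace is roughly this (a sketch, not the harness code itself):

# start bdevperf idle on its own RPC socket; the verify job itself is declared on the command line
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

# publish the two extra failover ports on the target subsystem
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# attach the same controller name to all three portals so bdev_nvme can fail over between them
for port in 4420 4421 4422; do
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# kick off the I/O and collect the JSON result shown below
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests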
00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.673 09:49:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:27.673 09:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.673 09:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:27.673 09:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:27.931 [2024-10-07 09:49:22.512858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.931 09:49:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:28.496 [2024-10-07 09:49:23.102769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:28.496 09:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.062 NVMe0n1 00:28:29.062 09:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.628 00:28:29.628 09:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:30.193 00:28:30.193 09:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:30.193 09:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:31.127 09:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:31.385 09:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:34.665 09:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:34.665 09:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:34.665 09:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1622878 00:28:34.665 09:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:34.665 09:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1622878 00:28:36.039 { 00:28:36.039 "results": [ 00:28:36.039 { 00:28:36.039 "job": "NVMe0n1", 00:28:36.039 "core_mask": "0x1", 00:28:36.039 "workload": "verify", 
00:28:36.039 "status": "finished", 00:28:36.040 "verify_range": { 00:28:36.040 "start": 0, 00:28:36.040 "length": 16384 00:28:36.040 }, 00:28:36.040 "queue_depth": 128, 00:28:36.040 "io_size": 4096, 00:28:36.040 "runtime": 1.007223, 00:28:36.040 "iops": 8489.679048234602, 00:28:36.040 "mibps": 33.162808782166415, 00:28:36.040 "io_failed": 0, 00:28:36.040 "io_timeout": 0, 00:28:36.040 "avg_latency_us": 15005.943496493805, 00:28:36.040 "min_latency_us": 2985.528888888889, 00:28:36.040 "max_latency_us": 13398.471111111112 00:28:36.040 } 00:28:36.040 ], 00:28:36.040 "core_count": 1 00:28:36.040 } 00:28:36.040 09:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:36.040 [2024-10-07 09:49:21.922997] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:28:36.040 [2024-10-07 09:49:21.923104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621965 ] 00:28:36.040 [2024-10-07 09:49:21.987558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.040 [2024-10-07 09:49:22.094612] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.040 [2024-10-07 09:49:25.991026] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:36.040 [2024-10-07 09:49:25.991104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.040 [2024-10-07 09:49:25.991126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.040 [2024-10-07 09:49:25.991143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.040 [2024-10-07 09:49:25.991166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.040 [2024-10-07 09:49:25.991194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.040 [2024-10-07 09:49:25.991208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.040 [2024-10-07 09:49:25.991221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.040 [2024-10-07 09:49:25.991233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.040 [2024-10-07 09:49:25.991246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:36.040 [2024-10-07 09:49:25.991288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:36.040 [2024-10-07 09:49:25.991318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1e80 (9): Bad file descriptor 00:28:36.040 [2024-10-07 09:49:26.001572] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:36.040 Running I/O for 1 seconds... 
00:28:36.040 8423.00 IOPS, 32.90 MiB/s
00:28:36.040 Latency(us)
00:28:36.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.040 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:36.040 Verification LBA range: start 0x0 length 0x4000
00:28:36.040 NVMe0n1 : 1.01 8489.68 33.16 0.00 0.00 15005.94 2985.53 13398.47
00:28:36.040 ===================================================================================================================
00:28:36.040 Total : 8489.68 33.16 0.00 0.00 15005.94 2985.53 13398.47
00:28:36.040 09:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:36.040 09:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:28:36.617 09:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:36.950 09:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:36.950 09:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:28:37.226 09:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:37.791 09:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:28:41.071 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:41.071 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1621965
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1621965 ']'
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1621965
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621965
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621965'
00:28:41.329 killing process with pid 1621965
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1621965
00:28:41.329 09:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1621965
00:28:41.587 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:28:41.587 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.845 rmmod nvme_tcp 00:28:41.845 rmmod nvme_fabrics 00:28:41.845 rmmod nvme_keyring 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.845 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1619570 ']' 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1619570 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1619570 ']' 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1619570 00:28:41.846 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1619570 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1619570' 00:28:42.131 killing process with pid 1619570 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1619570 00:28:42.131 09:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1619570 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:28:42.390 
09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.390 09:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.922 00:28:44.922 real 0m41.305s 00:28:44.922 user 2m29.023s 00:28:44.922 sys 0m7.478s 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:44.922 ************************************ 00:28:44.922 END TEST nvmf_failover 00:28:44.922 ************************************ 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.922 ************************************ 00:28:44.922 START TEST nvmf_host_discovery 00:28:44.922 ************************************ 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:44.922 * Looking for test storage... 
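Between the END TEST nvmf_failover banner above and the discovery run that is now starting, nvmftestfini tears the fabric back down: the kernel initiator modules are unloaded, the SPDK-tagged iptables rules are stripped, the target namespace is removed and the leftover address on cvl_0_1 is flushed. A condensed sketch of that teardown, assuming root on the CI host and the interface/namespace names used in this run (the explicit namespace delete is an assumption about what _remove_spdk_ns amounts to):

# unload the kernel NVMe-oF initiator stack (module names as reported by rmmod above)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# keep every firewall rule except the SPDK-tagged ones
iptables-save | grep -v SPDK_NVMF | iptables-restore

# drop the target namespace and flush the initiator-side address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1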
00:28:44.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:44.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.922 --rc genhtml_branch_coverage=1 00:28:44.922 --rc genhtml_function_coverage=1 00:28:44.922 --rc genhtml_legend=1 00:28:44.922 --rc geninfo_all_blocks=1 00:28:44.922 --rc geninfo_unexecuted_blocks=1 00:28:44.922 00:28:44.922 ' 00:28:44.922 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:44.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.922 --rc genhtml_branch_coverage=1 00:28:44.922 --rc genhtml_function_coverage=1 00:28:44.922 --rc genhtml_legend=1 00:28:44.922 --rc geninfo_all_blocks=1 00:28:44.922 --rc geninfo_unexecuted_blocks=1 00:28:44.922 00:28:44.922 ' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.923 --rc genhtml_branch_coverage=1 00:28:44.923 --rc genhtml_function_coverage=1 00:28:44.923 --rc genhtml_legend=1 00:28:44.923 --rc geninfo_all_blocks=1 00:28:44.923 --rc geninfo_unexecuted_blocks=1 00:28:44.923 00:28:44.923 ' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.923 --rc genhtml_branch_coverage=1 00:28:44.923 --rc genhtml_function_coverage=1 00:28:44.923 --rc genhtml_legend=1 00:28:44.923 --rc geninfo_all_blocks=1 00:28:44.923 --rc geninfo_unexecuted_blocks=1 00:28:44.923 00:28:44.923 ' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:44.923 09:49:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.923 09:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:47.458 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:47.458 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.458 09:49:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:47.458 Found net devices under 0000:84:00.0: cvl_0_0 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:47.458 Found net devices under 0000:84:00.1: cvl_0_1 00:28:47.458 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.459 
09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:47.459 00:28:47.459 --- 10.0.0.2 ping statistics --- 00:28:47.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.459 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:47.459 00:28:47.459 --- 10.0.0.1 ping statistics --- 00:28:47.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.459 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1625715 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1625715 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1625715 ']' 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.459 09:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 [2024-10-07 09:49:42.343522] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
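nvmf_tcp_init and nvmfappstart, whose trace ends above, give the discovery test the same two-node layout as the failover run: the first e810 port (cvl_0_0) is moved into a namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, reachability is ping-checked both ways, and nvmf_tgt is then started inside the namespace on core 1 (-m 0x2). Reduced to plain commands, with workspace paths shortened and a simple RPC-readiness poll standing in for the harness's waitforlisten helper, the setup is approximately:

# target side: first port lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator side: second port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# open the NVMe/TCP port towards the initiator NIC and verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# launch the target inside the namespace with all trace groups enabled, pinned to core 1
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# wait for the default RPC socket (/var/tmp/spdk.sock) to answer before configuring anything
until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 1; done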
00:28:47.718 [2024-10-07 09:49:42.343713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.718 [2024-10-07 09:49:42.457340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.977 [2024-10-07 09:49:42.607706] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.977 [2024-10-07 09:49:42.607824] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.977 [2024-10-07 09:49:42.607860] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.977 [2024-10-07 09:49:42.607920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.977 [2024-10-07 09:49:42.607976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.977 [2024-10-07 09:49:42.609042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.912 [2024-10-07 09:49:43.711125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.912 [2024-10-07 09:49:43.719451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.912 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.171 null0 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.171 null1 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1625913 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1625913 /tmp/host.sock 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1625913 ']' 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:49.171 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.171 09:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.171 [2024-10-07 09:49:43.830110] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
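The target-side provisioning performed by the rpc_cmd calls above can be reproduced directly with SPDK's scripts/rpc.py; a sketch assuming the target's default /var/tmp/spdk.sock socket (the test wrapper adds the netns and socket plumbing itself):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # same transport options the test passes
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service on 10.0.0.2:8009
./scripts/rpc.py bdev_null_create null0 1000 512                # two null bdevs to export through cnode0 later
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine
# The "host" side of the test is a second SPDK app on core 0 with its own RPC socket:
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &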
00:28:49.171 [2024-10-07 09:49:43.830187] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625913 ] 00:28:49.171 [2024-10-07 09:49:43.914546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.429 [2024-10-07 09:49:44.036117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:49.688 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.946 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:49.947 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.204 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:50.204 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 [2024-10-07 09:49:44.766278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:50.205 09:49:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:28:50.205 09:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:28:50.771 [2024-10-07 09:49:45.451051] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:50.771 [2024-10-07 09:49:45.451080] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:50.771 [2024-10-07 09:49:45.451104] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:50.771 
[2024-10-07 09:49:45.537413] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:51.029 [2024-10-07 09:49:45.641391] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:51.029 [2024-10-07 09:49:45.641418] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:51.286 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.286 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:51.286 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:51.286 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:51.286 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:51.287 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.287 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.287 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:51.287 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:51.287 09:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
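On the host side, the sequence above starts discovery against the target's 8009 discovery service and then waits until the controller and namespace created on the target show up. A sketch with scripts/rpc.py against the host app's /tmp/host.sock; names and NQNs are the ones visible in this log:

./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# Once the target has nqn.2016-06.io.spdk:cnode0 with null0 attached, a 4420 listener,
# and this host NQN allowed, the discovery service attaches it automatically:
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # -> nvme0n1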
00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:51.287 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.545 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 [2024-10-07 09:49:46.391016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:51.804 [2024-10-07 09:49:46.391717] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:51.804 [2024-10-07 09:49:46.391755] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.804 [2024-10-07 09:49:46.519586] bdev_nvme.c:7088:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:51.804 09:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:28:51.804 [2024-10-07 09:49:46.584520] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:51.804 [2024-10-07 09:49:46.584545] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:51.804 [2024-10-07 09:49:46.584556] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:52.738 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 [2024-10-07 09:49:47.623705] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:52.997 [2024-10-07 09:49:47.623742] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:52.997 [2024-10-07 09:49:47.630772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.997 [2024-10-07 09:49:47.630801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.997 [2024-10-07 09:49:47.630817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.997 [2024-10-07 09:49:47.630830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.997 [2024-10-07 09:49:47.630845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.997 [2024-10-07 09:49:47.630885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.997 [2024-10-07 09:49:47.630910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.997 [2024-10-07 09:49:47.630924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.997 [2024-10-07 09:49:47.630953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:52.997 [2024-10-07 09:49:47.640779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.997 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.997 [2024-10-07 09:49:47.650821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.997 [2024-10-07 09:49:47.651054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.997 [2024-10-07 09:49:47.651085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.997 [2024-10-07 09:49:47.651103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.997 [2024-10-07 09:49:47.651127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.997 [2024-10-07 09:49:47.651149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.997 [2024-10-07 09:49:47.651164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.997 [2024-10-07 09:49:47.651195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.997 [2024-10-07 09:49:47.651215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
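The step above adds a second namespace and a second listener on the target, then removes the original 4420 listener; each change triggers a discovery AER on the host, which re-reads the discovery log page and adjusts its paths. The "connect() failed, errno = 111" and "Resetting controller failed" messages are the existing 4420 connection breaking and its reconnect attempts being refused (errno 111 is ECONNREFUSED), which is expected once that listener is gone. Target-side sketch of the same three RPCs, as seen in the log:

./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                               # host gains nvme0n2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421    # host gains a 4421 path
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 # 4420 path goes away
# Host-side check of the remaining path(s) for controller nvme0:
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n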
00:28:52.997 [2024-10-07 09:49:47.660906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.997 [2024-10-07 09:49:47.661078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.997 [2024-10-07 09:49:47.661107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.997 [2024-10-07 09:49:47.661125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.997 [2024-10-07 09:49:47.661147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.997 [2024-10-07 09:49:47.661167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.997 [2024-10-07 09:49:47.661198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.997 [2024-10-07 09:49:47.661211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.997 [2024-10-07 09:49:47.661231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:52.998 [2024-10-07 09:49:47.670981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.998 [2024-10-07 09:49:47.671130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.998 [2024-10-07 09:49:47.671160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.998 [2024-10-07 09:49:47.671192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.998 [2024-10-07 09:49:47.671215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.998 [2024-10-07 09:49:47.671239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.998 [2024-10-07 09:49:47.671254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.998 [2024-10-07 09:49:47.671268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.998 [2024-10-07 09:49:47.671286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:52.998 [2024-10-07 09:49:47.681061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.998 [2024-10-07 09:49:47.681275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.998 [2024-10-07 09:49:47.681303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.998 [2024-10-07 09:49:47.681320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.998 [2024-10-07 09:49:47.681341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.998 [2024-10-07 09:49:47.681361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.998 [2024-10-07 09:49:47.681375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.998 [2024-10-07 09:49:47.681387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.998 [2024-10-07 09:49:47.681406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.998 [2024-10-07 09:49:47.691142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.998 [2024-10-07 09:49:47.691334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.998 [2024-10-07 09:49:47.691361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.998 [2024-10-07 09:49:47.691377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.998 [2024-10-07 09:49:47.691399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.998 [2024-10-07 09:49:47.691419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.998 [2024-10-07 09:49:47.691433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.998 [2024-10-07 09:49:47.691446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.998 [2024-10-07 09:49:47.691464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
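The repeated checks in this log poll through a small retry helper rather than sleeping a fixed time. A sketch reconstructed from the xtrace above (local max=10, eval of the condition, sleep 1 between attempts); the failure branch is an assumption, since every condition in this run eventually passes:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1   # assumed: give up after ~10 attempts if the condition never holds
}
# e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'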
00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.998 [2024-10-07 09:49:47.701215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.998 [2024-10-07 09:49:47.701414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.998 [2024-10-07 09:49:47.701441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.998 [2024-10-07 09:49:47.701457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.998 [2024-10-07 09:49:47.701478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.998 [2024-10-07 09:49:47.701497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.998 [2024-10-07 09:49:47.701511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.998 [2024-10-07 09:49:47.701523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.998 [2024-10-07 09:49:47.701553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:52.998 [2024-10-07 09:49:47.711288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.998 [2024-10-07 09:49:47.711452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.998 [2024-10-07 09:49:47.711504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690e90 with addr=10.0.0.2, port=4420 00:28:52.998 [2024-10-07 09:49:47.711525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690e90 is same with the state(6) to be set 00:28:52.998 [2024-10-07 09:49:47.711550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690e90 (9): Bad file descriptor 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:52.998 [2024-10-07 09:49:47.711602] 
bdev_nvme.c:6951:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:52.998 [2024-10-07 09:49:47.711632] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:52.998 [2024-10-07 09:49:47.711670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:52.998 [2024-10-07 09:49:47.711694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:52.998 [2024-10-07 09:49:47.711711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:52.998 [2024-10-07 09:49:47.711737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:52.998 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:52.999 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:53.256 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.257 09:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.188 [2024-10-07 09:49:48.978630] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:54.188 [2024-10-07 09:49:48.978656] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:54.188 [2024-10-07 09:49:48.978681] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:54.446 [2024-10-07 09:49:49.065956] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:54.704 [2024-10-07 09:49:49.336423] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:54.704 [2024-10-07 09:49:49.336463] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.704 request: 00:28:54.704 { 00:28:54.704 "name": "nvme", 00:28:54.704 "trtype": "tcp", 00:28:54.704 "traddr": "10.0.0.2", 00:28:54.704 "adrfam": "ipv4", 00:28:54.704 "trsvcid": "8009", 00:28:54.704 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:54.704 "wait_for_attach": true, 00:28:54.704 "method": "bdev_nvme_start_discovery", 00:28:54.704 "req_id": 1 00:28:54.704 } 00:28:54.704 Got JSON-RPC error response 00:28:54.704 response: 00:28:54.704 { 00:28:54.704 "code": -17, 00:28:54.704 "message": "File exists" 00:28:54.704 } 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:54.704 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.705 request: 00:28:54.705 { 00:28:54.705 "name": "nvme_second", 00:28:54.705 "trtype": "tcp", 00:28:54.705 "traddr": "10.0.0.2", 00:28:54.705 "adrfam": "ipv4", 00:28:54.705 "trsvcid": "8009", 00:28:54.705 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:54.705 "wait_for_attach": true, 00:28:54.705 "method": "bdev_nvme_start_discovery", 00:28:54.705 "req_id": 1 00:28:54.705 } 00:28:54.705 Got JSON-RPC error response 00:28:54.705 response: 00:28:54.705 { 00:28:54.705 "code": -17, 00:28:54.705 "message": "File exists" 00:28:54.705 } 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:54.705 09:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:54.705 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.962 09:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.025 [2024-10-07 09:49:50.600046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-10-07 09:49:50.600094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7730 with addr=10.0.0.2, port=8010 00:28:56.025 [2024-10-07 09:49:50.600125] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:56.025 [2024-10-07 09:49:50.600138] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:56.025 [2024-10-07 09:49:50.600150] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:56.975 [2024-10-07 09:49:51.602468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.975 [2024-10-07 09:49:51.602517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7730 with addr=10.0.0.2, port=8010 00:28:56.975 [2024-10-07 09:49:51.602546] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:56.975 [2024-10-07 09:49:51.602562] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:28:56.975 [2024-10-07 09:49:51.602577] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:57.908 [2024-10-07 09:49:52.604663] bdev_nvme.c:7207:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:57.908 request: 00:28:57.908 { 00:28:57.908 "name": "nvme_second", 00:28:57.908 "trtype": "tcp", 00:28:57.908 "traddr": "10.0.0.2", 00:28:57.908 "adrfam": "ipv4", 00:28:57.908 "trsvcid": "8010", 00:28:57.908 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:57.908 "wait_for_attach": false, 00:28:57.908 "attach_timeout_ms": 3000, 00:28:57.908 "method": "bdev_nvme_start_discovery", 00:28:57.908 "req_id": 1 00:28:57.908 } 00:28:57.908 Got JSON-RPC error response 00:28:57.908 response: 00:28:57.908 { 00:28:57.908 "code": -110, 00:28:57.909 "message": "Connection timed out" 00:28:57.909 } 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1625913 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.909 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.909 rmmod nvme_tcp 00:28:57.909 rmmod nvme_fabrics 00:28:57.909 rmmod nvme_keyring 00:28:58.167 09:49:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1625715 ']' 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1625715 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1625715 ']' 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1625715 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625715 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625715' 00:28:58.167 killing process with pid 1625715 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1625715 00:28:58.167 09:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1625715 00:28:58.426 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.427 09:49:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.961 00:29:00.961 real 0m16.025s 00:29:00.961 user 0m23.018s 00:29:00.961 sys 0m3.750s 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.961 
************************************ 00:29:00.961 END TEST nvmf_host_discovery 00:29:00.961 ************************************ 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.961 ************************************ 00:29:00.961 START TEST nvmf_host_multipath_status 00:29:00.961 ************************************ 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:00.961 * Looking for test storage... 00:29:00.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.961 --rc genhtml_branch_coverage=1 00:29:00.961 --rc genhtml_function_coverage=1 00:29:00.961 --rc genhtml_legend=1 00:29:00.961 --rc geninfo_all_blocks=1 00:29:00.961 --rc geninfo_unexecuted_blocks=1 00:29:00.961 00:29:00.961 ' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.961 --rc genhtml_branch_coverage=1 00:29:00.961 --rc genhtml_function_coverage=1 00:29:00.961 --rc genhtml_legend=1 00:29:00.961 --rc geninfo_all_blocks=1 00:29:00.961 --rc geninfo_unexecuted_blocks=1 00:29:00.961 00:29:00.961 ' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.961 --rc genhtml_branch_coverage=1 00:29:00.961 --rc genhtml_function_coverage=1 00:29:00.961 --rc genhtml_legend=1 00:29:00.961 --rc geninfo_all_blocks=1 00:29:00.961 --rc geninfo_unexecuted_blocks=1 00:29:00.961 00:29:00.961 ' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.961 --rc genhtml_branch_coverage=1 00:29:00.961 --rc genhtml_function_coverage=1 00:29:00.961 --rc genhtml_legend=1 00:29:00.961 --rc geninfo_all_blocks=1 00:29:00.961 --rc geninfo_unexecuted_blocks=1 00:29:00.961 00:29:00.961 ' 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.961 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.962 09:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.495 09:49:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:03.495 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:03.495 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:03.495 Found net devices under 0000:84:00.0: cvl_0_0 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: 
cvl_0_1' 00:29:03.495 Found net devices under 0000:84:00.1: cvl_0_1 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:03.495 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.496 09:49:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:29:03.496 00:29:03.496 --- 10.0.0.2 ping statistics --- 00:29:03.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.496 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:03.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:29:03.496 00:29:03.496 --- 10.0.0.1 ping statistics --- 00:29:03.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.496 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1629091 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1629091 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1629091 ']' 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:03.496 09:49:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:03.496 09:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:03.496 [2024-10-07 09:49:58.042482] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:29:03.496 [2024-10-07 09:49:58.042625] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.496 [2024-10-07 09:49:58.138287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:03.496 [2024-10-07 09:49:58.261183] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.496 [2024-10-07 09:49:58.261251] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.496 [2024-10-07 09:49:58.261268] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.496 [2024-10-07 09:49:58.261282] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.496 [2024-10-07 09:49:58.261294] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.496 [2024-10-07 09:49:58.262221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.496 [2024-10-07 09:49:58.262228] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1629091 00:29:03.753 09:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:04.317 [2024-10-07 09:49:59.010287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.318 09:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:04.883 Malloc0 00:29:04.883 09:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:29:05.450 09:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.015 09:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.273 [2024-10-07 09:50:00.995657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.273 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:06.530 [2024-10-07 09:50:01.324716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1629508 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1629508 /var/tmp/bdevperf.sock 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1629508 ']' 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
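For reference, the namespace plumbing and target bring-up traced above condense to the short sequence below. The commands are copied from the log; cvl_0_0/cvl_0_1 are the interface names of the e810 ports found on this host, and $SPDK is an assumed shorthand for the workspace path, so treat this as a sketch of what nvmf_tcp_init and the test setup do rather than the scripts themselves.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Data path: move one port into a namespace and give each side a 10.0.0.x address.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                 # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host

# Target: nvmf_tgt runs inside the namespace (backgrounded by nvmfappstart in the log),
# then gets a TCP transport, one malloc namespace and two listeners on ports 4420/4421.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421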
00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.788 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:07.353 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.353 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:07.353 09:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:07.917 09:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:08.489 Nvme0n1 00:29:08.489 09:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:09.423 Nvme0n1 00:29:09.423 09:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:09.423 09:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:11.325 09:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:11.325 09:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:11.892 09:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:12.151 09:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:13.085 09:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:13.085 09:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:13.085 09:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.085 09:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:13.652 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:13.652 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:13.652 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.652 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:13.910 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:13.910 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:13.910 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.910 09:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:14.519 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:14.519 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:14.519 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.519 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:15.085 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.085 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:15.085 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.085 09:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:15.650 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.650 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:15.650 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.650 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:15.909 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.909 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:15.909 09:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:16.475 09:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:17.041 09:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:17.975 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:17.975 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:17.975 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:17.975 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:18.233 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:18.233 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:18.233 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:18.233 09:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:18.799 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:18.799 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:18.799 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:18.799 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:19.057 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.057 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:19.057 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.057 09:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:19.624 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.624 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:19.624 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.624 09:50:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:19.882 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.882 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:19.882 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.882 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:20.141 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:20.141 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:20.141 09:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:20.707 09:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:20.965 09:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:29:22.339 09:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:29:22.339 09:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:22.339 09:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.339 09:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:22.598 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.598 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:22.598 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:22.598 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.164 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:23.164 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:23.164 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.164 09:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:23.733 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.733 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:23.733 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.733 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:24.300 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.300 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:24.300 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:24.300 09:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.867 09:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.867 09:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:24.867 09:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.867 09:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:25.433 09:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.433 09:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:29:25.433 09:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:25.999 09:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:26.257 09:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:29:27.191 09:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:29:27.191 09:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:27.191 09:50:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.191 09:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:27.756 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.756 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:27.756 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.756 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:28.014 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:28.014 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:28.014 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.014 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:28.271 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.271 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:28.271 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.271 09:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:28.530 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.530 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:28.530 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.530 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:28.788 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.788 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:28.788 09:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.788 09:50:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:29.354 09:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:29.354 09:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:29.354 09:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:29.920 09:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:30.486 09:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:31.418 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:31.418 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:31.418 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.418 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:31.675 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:31.675 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:31.675 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.675 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:32.241 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:32.241 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:32.241 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.241 09:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:32.499 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:32.499 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:32.499 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.499 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:32.756 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:32.756 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:32.756 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.756 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:33.321 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:33.321 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:33.321 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.321 09:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:33.886 09:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:33.886 09:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:33.886 09:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:34.450 09:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:34.707 09:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:35.641 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:35.641 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:35.641 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.641 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:36.207 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:36.207 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:36.207 09:50:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.207 09:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:36.773 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:36.773 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:36.773 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.773 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:37.031 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:37.031 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:37.031 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.031 09:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:37.597 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:37.597 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:37.597 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.598 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:38.163 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:38.163 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:38.163 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.163 09:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:38.729 09:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.729 09:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:38.987 09:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:29:38.987 09:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:39.245 09:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:39.810 09:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:40.761 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:40.762 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:40.762 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.762 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:41.327 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.327 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:41.327 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.327 09:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:41.963 09:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.963 09:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:41.963 09:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.963 09:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:42.254 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:42.254 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:42.254 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.254 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:42.820 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:42.820 09:50:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:42.820 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.820 09:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:43.386 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.386 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:43.386 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.386 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:43.952 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.952 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:43.952 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:44.210 09:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:44.776 09:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:45.710 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:45.710 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:45.710 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:45.710 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:46.274 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:46.274 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:46.274 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.274 09:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:46.841 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.841 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:46.841 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.841 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:47.099 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:47.099 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:47.099 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:47.099 09:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:47.664 09:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:47.664 09:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:47.664 09:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:47.664 09:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:48.231 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:48.231 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:48.231 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:48.231 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:48.797 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:48.797 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:48.797 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:49.054 09:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:49.620 09:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
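The long block of checks above repeats one pattern: flip the ANA state of the two listeners, sleep a second, then ask bdevperf for its I/O paths and compare the current/connected/accessible flag of each port against the expected value. A condensed sketch of that loop follows; set_ANA_state and port_status mirror the helpers in host/multipath_status.sh, but their bodies here are reconstructed from the traced rpc.py and jq calls, not taken from the script itself.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
BPERF=/var/tmp/bdevperf.sock

set_ANA_state() {   # $1 -> listener on port 4420, $2 -> listener on port 4421
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # $1 = port, $2 = field (current|connected|accessible), $3 = expected value
    local got
    got=$($RPC -s $BPERF bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
}

# Example: make port 4420 the only usable path and check what bdevperf reports.
set_ANA_state non_optimized inaccessible
sleep 1
port_status 4420 current true && port_status 4421 accessible false && echo OK

After the single-path cases, the log switches the bdev to active/active with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active and repeats the same ANA-state/port_status cycle with both paths expected to stay current.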
00:29:50.554 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:50.554 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:50.554 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.554 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:50.812 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.812 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:50.812 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.812 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:51.399 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:51.399 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:51.399 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:51.399 09:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:51.657 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:51.657 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:51.657 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:51.657 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:52.223 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:52.223 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:52.223 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.223 09:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:52.481 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:52.481 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:52.481 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.481 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:53.046 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:53.046 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:53.046 09:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:53.611 09:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:53.868 09:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:54.800 09:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:54.800 09:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:54.800 09:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.800 09:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:55.365 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.365 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:55.365 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.365 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:55.931 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:55.931 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:55.931 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.931 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:56.188 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:29:56.189 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:56.189 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:56.189 09:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:56.754 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:56.754 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:56.754 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:56.754 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:57.321 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:57.321 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:57.321 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.321 09:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1629508 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1629508 ']' 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1629508 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1629508 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1629508' 00:29:57.887 killing process with pid 1629508 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1629508 00:29:57.887 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1629508 00:29:57.887 { 00:29:57.887 "results": [ 00:29:57.887 { 00:29:57.887 "job": "Nvme0n1", 
00:29:57.887 "core_mask": "0x4", 00:29:57.887 "workload": "verify", 00:29:57.887 "status": "terminated", 00:29:57.887 "verify_range": { 00:29:57.887 "start": 0, 00:29:57.887 "length": 16384 00:29:57.887 }, 00:29:57.887 "queue_depth": 128, 00:29:57.887 "io_size": 4096, 00:29:57.887 "runtime": 48.218138, 00:29:57.887 "iops": 8487.760352753563, 00:29:57.887 "mibps": 33.155313877943605, 00:29:57.887 "io_failed": 0, 00:29:57.887 "io_timeout": 0, 00:29:57.887 "avg_latency_us": 15055.280056643687, 00:29:57.887 "min_latency_us": 928.4266666666666, 00:29:57.887 "max_latency_us": 5020737.232592593 00:29:57.887 } 00:29:57.887 ], 00:29:57.887 "core_count": 1 00:29:57.887 } 00:29:58.148 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1629508 00:29:58.148 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:58.148 [2024-10-07 09:50:01.405396] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:29:58.148 [2024-10-07 09:50:01.405513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629508 ] 00:29:58.148 [2024-10-07 09:50:01.476617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.148 [2024-10-07 09:50:01.589633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.148 [2024-10-07 09:50:03.821904] bdev_nvme.c:5607:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:29:58.148 Running I/O for 90 seconds... 
00:29:58.148 8749.00 IOPS, 34.18 MiB/s 8835.00 IOPS, 34.51 MiB/s 8813.00 IOPS, 34.43 MiB/s 8814.75 IOPS, 34.43 MiB/s 8857.40 IOPS, 34.60 MiB/s 8871.17 IOPS, 34.65 MiB/s 8862.14 IOPS, 34.62 MiB/s 8891.50 IOPS, 34.73 MiB/s 8905.56 IOPS, 34.79 MiB/s 8921.60 IOPS, 34.85 MiB/s 8939.91 IOPS, 34.92 MiB/s 8955.50 IOPS, 34.98 MiB/s 8941.69 IOPS, 34.93 MiB/s 8947.93 IOPS, 34.95 MiB/s 8935.07 IOPS, 34.90 MiB/s 8940.88 IOPS, 34.93 MiB/s 8915.65 IOPS, 34.83 MiB/s 8918.72 IOPS, 34.84 MiB/s 8929.05 IOPS, 34.88 MiB/s 8929.65 IOPS, 34.88 MiB/s [2024-10-07 09:50:24.474366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4648 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.474972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.474988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:85 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.475488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.475504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:58.149 [2024-10-07 09:50:24.476560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.149 [2024-10-07 09:50:24.476575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:58.150 
[2024-10-07 09:50:24.476674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.476978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.476995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.477973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.477989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.478030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.478073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.478120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.150 [2024-10-07 09:50:24.478163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.150 [2024-10-07 09:50:24.478205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.150 [2024-10-07 09:50:24.478245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:58.150 [2024-10-07 09:50:24.478270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.150 [2024-10-07 09:50:24.478286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 
[2024-10-07 09:50:24.478451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5256 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.478959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.478986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:58.151 [2024-10-07 09:50:24.479704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.151 [2024-10-07 09:50:24.479720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.479748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.479764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.479791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.479807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.479835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.479858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.479887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.479930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.479962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.479979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:58.152 
[2024-10-07 09:50:24.480379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:24.480617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:24.480816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:24.480831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:58.152 8608.90 IOPS, 33.63 MiB/s 8217.59 IOPS, 32.10 MiB/s 7860.30 IOPS, 30.70 MiB/s 7532.79 IOPS, 29.42 MiB/s 7231.48 IOPS, 28.25 MiB/s 7206.23 IOPS, 28.15 MiB/s 7268.00 IOPS, 28.39 MiB/s 7335.36 IOPS, 28.65 MiB/s 7388.86 IOPS, 28.86 MiB/s 7458.57 IOPS, 29.14 MiB/s 7587.23 IOPS, 29.64 MiB/s 7709.59 IOPS, 30.12 MiB/s 7820.79 IOPS, 30.55 MiB/s 7933.47 IOPS, 30.99 MiB/s 8013.11 IOPS, 31.30 MiB/s 8035.50 IOPS, 31.39 MiB/s 8057.65 IOPS, 31.48 MiB/s 8076.95 IOPS, 31.55 MiB/s 8100.74 IOPS, 31.64 MiB/s 8126.45 IOPS, 31.74 MiB/s 8206.68 IOPS, 32.06 MiB/s 8288.19 IOPS, 32.38 MiB/s 8366.14 IOPS, 32.68 MiB/s 8432.89 IOPS, 32.94 MiB/s [2024-10-07 09:50:48.558447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:48.558526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:48.558614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.152 [2024-10-07 09:50:48.558652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.558854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.558870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.559047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.559070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.559096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.152 [2024-10-07 09:50:48.559113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:58.152 [2024-10-07 09:50:48.559135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 
09:50:48.559713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.559973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.559988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111336 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.153 [2024-10-07 09:50:48.560297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:58.153 [2024-10-07 09:50:48.560318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.560775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.154 [2024-10-07 09:50:48.560789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 
m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.561882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.154 [2024-10-07 09:50:48.561928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.561956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.154 [2024-10-07 09:50:48.561974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.561995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.154 [2024-10-07 09:50:48.562010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:58.154 [2024-10-07 09:50:48.562031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.154 [2024-10-07 09:50:48.562046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:58.154 8461.20 IOPS, 33.05 MiB/s 8473.07 IOPS, 33.10 MiB/s 8485.77 IOPS, 33.15 MiB/s 8489.58 IOPS, 33.16 MiB/s Received shutdown signal, test time was about 48.218916 seconds 00:29:58.154 00:29:58.154 Latency(us) 00:29:58.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.154 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:58.154 Verification LBA range: start 0x0 length 0x4000 00:29:58.154 Nvme0n1 : 48.22 8487.76 33.16 0.00 0.00 15055.28 928.43 5020737.23 00:29:58.154 =================================================================================================================== 00:29:58.154 Total : 8487.76 33.16 0.00 0.00 15055.28 928.43 5020737.23 00:29:58.154 [2024-10-07 09:50:52.502275] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:29:58.154 09:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
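At this point the multipath_status run has settled at roughly 8.5k IOPS (33.16 MiB/s) over the 48-second verify job and begins its teardown, traced above and continuing below: the test subsystem is deleted over JSON-RPC, the EXIT trap is cleared, the scratch file is removed, and nvmftestfini unwinds the transport. A condensed sketch of that sequence, using the paths shown in this workspace:

    # Teardown condensed from the multipath_status.sh@143-148 xtrace above/below.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the test subsystem
    trap - SIGINT SIGTERM EXIT                                # clear the failure trap
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    nvmftestfini                                              # unload nvme-tcp and stop nvmf_tgt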
-- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.720 rmmod nvme_tcp 00:29:58.720 rmmod nvme_fabrics 00:29:58.720 rmmod nvme_keyring 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1629091 ']' 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1629091 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1629091 ']' 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1629091 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.720 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1629091 00:29:58.978 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.978 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.978 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1629091' 00:29:58.978 killing process with pid 1629091 00:29:58.978 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1629091 00:29:58.978 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1629091 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.237 09:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.143 00:30:01.143 real 1m0.628s 00:30:01.143 user 3m12.337s 00:30:01.143 sys 0m16.212s 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:01.143 ************************************ 00:30:01.143 END TEST nvmf_host_multipath_status 00:30:01.143 ************************************ 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.143 ************************************ 00:30:01.143 START TEST nvmf_discovery_remove_ifc 00:30:01.143 ************************************ 00:30:01.143 09:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:01.402 * Looking for test storage... 00:30:01.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:01.402 09:50:56 
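nvmftestfini, traced across the lines above, is the mirror image of the setup: it unloads the kernel initiator modules, kills the target, strips only the SPDK-tagged firewall rules, and removes the target namespace. A simplified sketch of the traced steps (the retry/break logic in the module-unload loop is condensed, and the namespace removal is an assumption about what remove_spdk_ns does):

    sync
    set +e
    for i in {1..20}; do                                  # retry until the modules unload
        modprobe -v -r nvme-tcp                           # also drags out nvme_fabrics/nvme_keyring
        modprobe -v -r nvme-fabrics && break              # break condition is a simplification
    done
    set -e
    if kill -0 "$nvmfpid" 2>/dev/null; then               # pid 1629091 in this run
        [ "$(ps --no-headers -o comm= "$nvmfpid")" = sudo ] || kill "$nvmfpid"
        wait "$nvmfpid"                                   # reap nvmf_tgt
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except SPDK rules
    ip netns delete cvl_0_0_ns_spdk                       # remove_spdk_ns equivalent (assumption)
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address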
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:01.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.402 --rc genhtml_branch_coverage=1 00:30:01.402 --rc genhtml_function_coverage=1 00:30:01.402 --rc genhtml_legend=1 00:30:01.402 --rc geninfo_all_blocks=1 00:30:01.402 --rc geninfo_unexecuted_blocks=1 00:30:01.402 00:30:01.402 ' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:01.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.402 --rc genhtml_branch_coverage=1 00:30:01.402 --rc genhtml_function_coverage=1 00:30:01.402 --rc genhtml_legend=1 00:30:01.402 --rc geninfo_all_blocks=1 00:30:01.402 --rc geninfo_unexecuted_blocks=1 00:30:01.402 00:30:01.402 ' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:01.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.402 --rc genhtml_branch_coverage=1 00:30:01.402 --rc genhtml_function_coverage=1 00:30:01.402 --rc genhtml_legend=1 00:30:01.402 --rc geninfo_all_blocks=1 00:30:01.402 --rc geninfo_unexecuted_blocks=1 00:30:01.402 00:30:01.402 ' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:01.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.402 --rc genhtml_branch_coverage=1 00:30:01.402 --rc genhtml_function_coverage=1 00:30:01.402 --rc genhtml_legend=1 00:30:01.402 --rc geninfo_all_blocks=1 00:30:01.402 --rc geninfo_unexecuted_blocks=1 00:30:01.402 
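The xtrace above comes from scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x: lt 1.15 2 splits both versions on dots and dashes and compares them field by field. A condensed reimplementation of that pattern (the real helper also normalizes non-numeric components through its decimal function):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-                      # split version strings on dots and dashes
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]                 # equal versions only satisfy <=, >=, ==
    }
    lt 1.15 2 && echo "lcov is older than 2.x"   # true for the 1.15 seen in this run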
00:30:01.402 ' 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.402 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:30:01.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.403 09:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.931 09:50:58 
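The "integer expression expected" message above is bash complaining that common.sh line 33 feeds an empty string to the numeric -eq test (the '[' '' -eq 1 ']' traced at the end of the previous line); the run continues, so it is cosmetic here. A minimal reproduction and the usual guard (the variable name below is hypothetical, not the one used at line 33):

    [ "" -eq 1 ]                     # prints "[: : integer expression expected", non-zero status
    SOME_FLAG=${SOME_FLAG:-0}        # hypothetical name: default the value before the numeric test
    [ "$SOME_FLAG" -eq 1 ] && echo "flag enabled"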
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:03.931 Found 
0000:84:00.0 (0x8086 - 0x159b) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:03.931 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.931 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:03.932 Found net devices under 0000:84:00.0: cvl_0_0 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:03.932 Found net devices under 0000:84:00.1: cvl_0_1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.932 09:50:58 
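The detection pass above walks the supported PCI functions (two Intel E810 ports, device ID 0x159b, at 0000:84:00.0 and 0000:84:00.1) and resolves each one to its kernel netdev by globbing sysfs, yielding cvl_0_0 and cvl_0_1. A condensed sketch of that lookup; the link-state check is simplified to reading operstate, which is an assumption about what the up/up test above inspects:

    for pci in 0000:84:00.0 0000:84:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        for net_dev in "${pci_net_devs[@]}"; do
            [ "$(cat "/sys/class/net/$net_dev/operstate")" = up ] || continue
            echo "Found net devices under $pci: $net_dev"
            net_devs+=("$net_dev")
        done
    done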
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:30:03.932 00:30:03.932 --- 10.0.0.2 ping statistics --- 00:30:03.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.932 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:03.932 00:30:03.932 --- 10.0.0.1 ping statistics --- 00:30:03.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.932 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1637571 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1637571 
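nvmf_tcp_init, traced above, splits the two ports across a network namespace so target and initiator talk over a real link: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2 and cvl_0_1 stays in the default namespace as 10.0.0.1, with an iptables rule admitting NVMe/TCP and a ping in each direction as a sanity check. The same topology, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator-side port
    ping -c 1 10.0.0.2                                  # initiator -> target (0.255 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.079 ms above)

Everything the target runs afterwards is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why nvmf_tgt below is launched through the namespace.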
00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1637571 ']' 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:03.932 09:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.932 [2024-10-07 09:50:58.712601] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:30:03.932 [2024-10-07 09:50:58.712696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.191 [2024-10-07 09:50:58.805736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.191 [2024-10-07 09:50:58.960039] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.191 [2024-10-07 09:50:58.960106] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.191 [2024-10-07 09:50:58.960123] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.191 [2024-10-07 09:50:58.960145] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.191 [2024-10-07 09:50:58.960180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
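nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 1637571) and waitforlisten then blocks until the RPC socket answers; the trace above only shows its defaults (/var/tmp/spdk.sock, 100 retries), not its polling body. A hypothetical stand-in for that wait, polling with a known-good RPC (rpc_get_methods); the rootdir path variable is illustrative:

    waitforlisten_sketch() {                      # not the real autotest_common.sh helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1              # give up if the target died
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }
    waitforlisten_sketch 1637571                  # pid from the launch traced above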
00:30:04.191 [2024-10-07 09:50:58.961151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 [2024-10-07 09:50:59.167524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.451 [2024-10-07 09:50:59.175886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:04.451 null0 00:30:04.451 [2024-10-07 09:50:59.208075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1637598 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1637598 /tmp/host.sock 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1637598 ']' 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:04.451 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:04.451 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.710 [2024-10-07 09:50:59.316285] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
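The notices above come from the target being configured through rpc_cmd at discovery_remove_ifc.sh@43 (its heredoc is not reproduced in this trace): a TCP transport, a discovery listener on 10.0.0.2:8009, a null bdev (null0) exported behind an I/O listener on 4420. One plausible way to express that state with standard SPDK RPCs; subsystem options and the bdev sizing are illustrative, not taken from the script:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp                                  # "*** TCP Transport Init ***"
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                                     # discovery service on 8009
    $rpc bdev_null_create null0 1000 512                               # 1000 MiB / 512 B, illustrative
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a           # -a (allow any host) is illustrative
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The host-side application on /tmp/host.sock then attaches through bdev_nvme_start_discovery against 10.0.0.2:8009 with hostnqn nqn.2021-12.io.spdk:test, as traced on the lines that follow.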
00:30:04.710 [2024-10-07 09:50:59.316446] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637598 ] 00:30:04.710 [2024-10-07 09:50:59.406771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.968 [2024-10-07 09:50:59.534302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.226 09:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:06.599 [2024-10-07 09:51:00.987036] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:06.599 [2024-10-07 09:51:00.987071] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:06.599 [2024-10-07 09:51:00.987093] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:06.599 [2024-10-07 09:51:01.073410] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:06.599 [2024-10-07 09:51:01.298807] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:06.599 [2024-10-07 09:51:01.298881] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:06.599 [2024-10-07 09:51:01.298950] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:06.599 [2024-10-07 09:51:01.298976] bdev_nvme.c:6983:discovery_attach_controller_done: 
*INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:06.599 [2024-10-07 09:51:01.299007] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:06.599 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.600 [2024-10-07 09:51:01.304926] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xff0850 was disconnected and freed. delete nvme_qpair. 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:06.600 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.858 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.858 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:06.858 09:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:07.829 
09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:07.829 09:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:08.785 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.042 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:09.042 09:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:09.975 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:09.975 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.975 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:09.976 09:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:10.908 09:51:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:10.908 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.165 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:11.165 09:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:12.096 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:12.096 [2024-10-07 09:51:06.739956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:12.096 [2024-10-07 09:51:06.740034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.097 [2024-10-07 09:51:06.740056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.097 [2024-10-07 09:51:06.740075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.097 [2024-10-07 09:51:06.740089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.097 [2024-10-07 09:51:06.740103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.097 [2024-10-07 09:51:06.740117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.097 [2024-10-07 09:51:06.740131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.097 [2024-10-07 09:51:06.740144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.097 [2024-10-07 09:51:06.740159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:12.097 [2024-10-07 09:51:06.740188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.097 [2024-10-07 09:51:06.740210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcd190 is same with the state(6) to be set 00:30:12.097 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.097 [2024-10-07 09:51:06.749984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcd190 (9): Bad file descriptor 00:30:12.097 [2024-10-07 09:51:06.760039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:12.097 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:12.097 09:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:13.030 [2024-10-07 09:51:07.792950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:13.030 [2024-10-07 09:51:07.793013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcd190 with addr=10.0.0.2, port=4420 00:30:13.030 [2024-10-07 09:51:07.793038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcd190 is same with the state(6) to be set 00:30:13.030 [2024-10-07 09:51:07.793083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcd190 (9): Bad file descriptor 00:30:13.030 [2024-10-07 09:51:07.793158] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:13.030 [2024-10-07 09:51:07.793213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:13.030 [2024-10-07 09:51:07.793230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:13.030 [2024-10-07 09:51:07.793262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:13.030 [2024-10-07 09:51:07.793292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
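The alternating get_bdev_list/sleep records above are the suite's polling loop: once the address is deleted and cvl_0_0 is taken down inside the target's namespace, the host keeps listing bdevs over /tmp/host.sock until nvme0n1 drops out, while the controller times out and retries. Reconstructed from the trace (helper names are the test's own; any retry bound is not visible in this excerpt):

    get_bdev_list() {    # discovery_remove_ifc.sh:29 in the trace
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {    # poll once per second until the bdev list matches the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    # after: ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    #        ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''     # i.e. wait until no nvme0n1 bdev is reported any more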
00:30:13.030 [2024-10-07 09:51:07.793309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.030 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:13.030 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.030 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:13.030 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.031 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:13.031 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:13.031 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:13.031 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.288 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:13.288 09:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:14.220 [2024-10-07 09:51:08.795804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.220 [2024-10-07 09:51:08.795837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.220 [2024-10-07 09:51:08.795854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.220 [2024-10-07 09:51:08.795870] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:14.220 [2024-10-07 09:51:08.795927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
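The reset that fails above is bounded by the options the test passed when it created the discovery controller: reconnect once per second, fast-fail I/O after one second, and drop the controller entirely after two seconds without a path. For reference, the call as it appears earlier in this trace:

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach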
00:30:14.220 [2024-10-07 09:51:08.795964] bdev_nvme.c:6915:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:14.220 [2024-10-07 09:51:08.796007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.220 [2024-10-07 09:51:08.796029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.220 [2024-10-07 09:51:08.796047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.220 [2024-10-07 09:51:08.796061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.220 [2024-10-07 09:51:08.796074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.220 [2024-10-07 09:51:08.796087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.220 [2024-10-07 09:51:08.796100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.220 [2024-10-07 09:51:08.796113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.220 [2024-10-07 09:51:08.796126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.220 [2024-10-07 09:51:08.796139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.220 [2024-10-07 09:51:08.796151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
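With the reconnect budget exhausted, the controller is failed and the discovery poller drops its entry for nqn.2016-06.io.spdk:cnode0 (remove_discovery_entry above). The second half of the test then restores the interface and waits for discovery to re-attach the subsystem under a new controller name; condensed from the trace that follows (device and namespace names are the ones used by this run, and this is a sketch rather than the verbatim script):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # the re-discovered subsystem attaches as nvme1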
00:30:14.220 [2024-10-07 09:51:08.796223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbc4c0 (9): Bad file descriptor 00:30:14.220 [2024-10-07 09:51:08.797211] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:14.220 [2024-10-07 09:51:08.797252] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:14.220 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:14.221 09:51:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.593 09:51:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:15.593 09:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.593 09:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:15.593 09:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:16.159 [2024-10-07 09:51:10.852560] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:16.159 [2024-10-07 09:51:10.852592] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:16.159 [2024-10-07 09:51:10.852617] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:16.417 [2024-10-07 09:51:10.979070] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:16.417 09:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:16.417 [2024-10-07 09:51:11.204612] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:16.417 [2024-10-07 09:51:11.204667] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:16.417 [2024-10-07 09:51:11.204705] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:16.417 [2024-10-07 09:51:11.204731] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:16.417 [2024-10-07 09:51:11.204745] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:16.417 [2024-10-07 09:51:11.211265] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfd75f0 was disconnected and freed. 
delete nvme_qpair. 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1637598 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1637598 ']' 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1637598 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.349 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637598 00:30:17.608 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:17.608 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:17.608 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637598' 00:30:17.608 killing process with pid 1637598 00:30:17.608 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1637598 00:30:17.608 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1637598 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.866 rmmod nvme_tcp 00:30:17.866 rmmod nvme_fabrics 00:30:17.866 rmmod nvme_keyring 
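Teardown then unwinds the fixture: the host app was killed just above (killprocess 1637598), and the nvmftestfini trace that follows kills the target, unloads the kernel NVMe/TCP initiator modules (the rmmod lines above are modprobe's verbose output), drops the SPDK-tagged iptables rules, and cleans up the test namespace. Roughly:

    # Condensed teardown sketch; the namespace-removal step is an assumption about
    # what the suite's _remove_spdk_ns helper amounts to, not a traced command.
    killprocess "$hostpid"                    # host app, pid 1637598 in this run
    modprobe -v -r nvme-tcp                   # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    killprocess 1637571                       # the nvmf target app
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only the non-SPDK rules
    ip netns delete cvl_0_0_ns_spdk           # assumed cleanup of the target netns
    ip -4 addr flush cvl_0_1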
00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1637571 ']' 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1637571 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1637571 ']' 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1637571 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637571 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637571' 00:30:17.866 killing process with pid 1637571 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1637571 00:30:17.866 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1637571 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.125 09:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.658 00:30:20.658 real 0m18.938s 00:30:20.658 user 0m27.623s 00:30:20.658 sys 0m3.589s 00:30:20.658 09:51:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:20.658 ************************************ 00:30:20.658 END TEST nvmf_discovery_remove_ifc 00:30:20.658 ************************************ 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.658 ************************************ 00:30:20.658 START TEST nvmf_identify_kernel_target 00:30:20.658 ************************************ 00:30:20.658 09:51:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:20.658 * Looking for test storage... 00:30:20.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:20.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.658 --rc genhtml_branch_coverage=1 00:30:20.658 --rc genhtml_function_coverage=1 00:30:20.658 --rc genhtml_legend=1 00:30:20.658 --rc geninfo_all_blocks=1 00:30:20.658 --rc geninfo_unexecuted_blocks=1 00:30:20.658 00:30:20.658 ' 00:30:20.658 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:20.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.658 --rc genhtml_branch_coverage=1 00:30:20.658 --rc genhtml_function_coverage=1 00:30:20.658 --rc genhtml_legend=1 00:30:20.658 --rc geninfo_all_blocks=1 00:30:20.658 --rc geninfo_unexecuted_blocks=1 00:30:20.658 00:30:20.658 ' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.659 --rc genhtml_branch_coverage=1 00:30:20.659 --rc genhtml_function_coverage=1 00:30:20.659 --rc genhtml_legend=1 00:30:20.659 --rc geninfo_all_blocks=1 00:30:20.659 --rc geninfo_unexecuted_blocks=1 00:30:20.659 00:30:20.659 ' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.659 --rc genhtml_branch_coverage=1 00:30:20.659 --rc genhtml_function_coverage=1 00:30:20.659 --rc genhtml_legend=1 00:30:20.659 --rc geninfo_all_blocks=1 00:30:20.659 --rc geninfo_unexecuted_blocks=1 00:30:20.659 00:30:20.659 ' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.659 09:51:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.190 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.190 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.191 09:51:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:23.191 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:23.191 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:23.191 Found net devices under 0000:84:00.0: cvl_0_0 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:23.191 Found net devices under 0000:84:00.1: cvl_0_1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:30:23.191 00:30:23.191 --- 10.0.0.2 ping statistics --- 00:30:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.191 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:23.191 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:23.191 00:30:23.191 --- 10.0.0.1 ping statistics --- 00:30:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.191 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:23.192 09:51:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:23.192 09:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:24.566 Waiting for block devices as requested 00:30:24.566 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:30:24.566 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:24.824 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:24.824 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:24.824 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:24.824 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:25.083 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:25.083 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:25.083 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:25.342 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:25.342 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:25.342 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:25.342 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:25.601 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:25.602 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:25.602 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:25.602 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
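The trace above (nvmf/common.sh, nvmf_tcp_init) builds the NVMe/TCP test bed on the two E810 ports before any target is configured: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so teardown can strip it later, reachability is checked with ping in both directions, and nvme-tcp is loaded. A condensed, hand-written sketch of that sequence, reconstructed from the trace, follows; interface names, addresses and the namespace name are copied from the log, but this is an illustrative sketch, not the harness script itself.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above -- illustrative, not nvmf/common.sh itself.
set -euxo pipefail

NS=cvl_0_0_ns_spdk      # target-side network namespace (name taken from the log)
TARGET_IF=cvl_0_0       # E810 port that will carry the target-side address
INIT_IF=cvl_0_1         # E810 port left in the root namespace for the initiator
TARGET_IP=10.0.0.2
INIT_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INIT_IP/24" dev "$INIT_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# The rule is tagged SPDK_NVMF so cleanup can later do:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INIT_IF -p tcp --dport 4420 -j ACCEPT"

# Verify reachability in both directions, then load the NVMe/TCP host driver
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INIT_IP"
modprobe nvme-tcp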
00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:25.860 No valid GPT data, bailing 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:25.860 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:30:26.120 00:30:26.120 Discovery Log Number of Records 2, Generation counter 2 00:30:26.120 =====Discovery Log Entry 0====== 00:30:26.120 trtype: tcp 00:30:26.120 adrfam: ipv4 00:30:26.120 subtype: current discovery subsystem 00:30:26.120 treq: not specified, sq flow control disable supported 00:30:26.120 portid: 1 00:30:26.120 trsvcid: 4420 00:30:26.120 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:26.120 traddr: 10.0.0.1 00:30:26.120 eflags: none 00:30:26.120 sectype: none 00:30:26.120 =====Discovery Log Entry 1====== 00:30:26.120 trtype: tcp 00:30:26.120 adrfam: ipv4 00:30:26.120 subtype: nvme subsystem 00:30:26.120 treq: not specified, sq flow control disable 
supported 00:30:26.120 portid: 1 00:30:26.120 trsvcid: 4420 00:30:26.120 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:26.120 traddr: 10.0.0.1 00:30:26.120 eflags: none 00:30:26.120 sectype: none 00:30:26.120 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:26.120 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:26.120 ===================================================== 00:30:26.120 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:26.120 ===================================================== 00:30:26.120 Controller Capabilities/Features 00:30:26.120 ================================ 00:30:26.120 Vendor ID: 0000 00:30:26.120 Subsystem Vendor ID: 0000 00:30:26.120 Serial Number: ca217bbb59e2faaea49c 00:30:26.120 Model Number: Linux 00:30:26.120 Firmware Version: 6.8.9-20 00:30:26.120 Recommended Arb Burst: 0 00:30:26.120 IEEE OUI Identifier: 00 00 00 00:30:26.120 Multi-path I/O 00:30:26.120 May have multiple subsystem ports: No 00:30:26.120 May have multiple controllers: No 00:30:26.120 Associated with SR-IOV VF: No 00:30:26.120 Max Data Transfer Size: Unlimited 00:30:26.120 Max Number of Namespaces: 0 00:30:26.120 Max Number of I/O Queues: 1024 00:30:26.120 NVMe Specification Version (VS): 1.3 00:30:26.120 NVMe Specification Version (Identify): 1.3 00:30:26.120 Maximum Queue Entries: 1024 00:30:26.120 Contiguous Queues Required: No 00:30:26.120 Arbitration Mechanisms Supported 00:30:26.120 Weighted Round Robin: Not Supported 00:30:26.120 Vendor Specific: Not Supported 00:30:26.120 Reset Timeout: 7500 ms 00:30:26.120 Doorbell Stride: 4 bytes 00:30:26.120 NVM Subsystem Reset: Not Supported 00:30:26.120 Command Sets Supported 00:30:26.120 NVM Command Set: Supported 00:30:26.120 Boot Partition: Not Supported 00:30:26.120 Memory Page Size Minimum: 4096 bytes 00:30:26.120 Memory Page Size Maximum: 4096 bytes 00:30:26.120 Persistent Memory Region: Not Supported 00:30:26.120 Optional Asynchronous Events Supported 00:30:26.120 Namespace Attribute Notices: Not Supported 00:30:26.120 Firmware Activation Notices: Not Supported 00:30:26.120 ANA Change Notices: Not Supported 00:30:26.120 PLE Aggregate Log Change Notices: Not Supported 00:30:26.120 LBA Status Info Alert Notices: Not Supported 00:30:26.120 EGE Aggregate Log Change Notices: Not Supported 00:30:26.120 Normal NVM Subsystem Shutdown event: Not Supported 00:30:26.120 Zone Descriptor Change Notices: Not Supported 00:30:26.120 Discovery Log Change Notices: Supported 00:30:26.120 Controller Attributes 00:30:26.120 128-bit Host Identifier: Not Supported 00:30:26.120 Non-Operational Permissive Mode: Not Supported 00:30:26.120 NVM Sets: Not Supported 00:30:26.120 Read Recovery Levels: Not Supported 00:30:26.120 Endurance Groups: Not Supported 00:30:26.120 Predictable Latency Mode: Not Supported 00:30:26.120 Traffic Based Keep ALive: Not Supported 00:30:26.120 Namespace Granularity: Not Supported 00:30:26.120 SQ Associations: Not Supported 00:30:26.120 UUID List: Not Supported 00:30:26.120 Multi-Domain Subsystem: Not Supported 00:30:26.120 Fixed Capacity Management: Not Supported 00:30:26.120 Variable Capacity Management: Not Supported 00:30:26.120 Delete Endurance Group: Not Supported 00:30:26.120 Delete NVM Set: Not Supported 00:30:26.120 Extended LBA Formats Supported: Not Supported 00:30:26.120 Flexible Data Placement 
Supported: Not Supported 00:30:26.120 00:30:26.120 Controller Memory Buffer Support 00:30:26.120 ================================ 00:30:26.120 Supported: No 00:30:26.120 00:30:26.120 Persistent Memory Region Support 00:30:26.120 ================================ 00:30:26.120 Supported: No 00:30:26.120 00:30:26.120 Admin Command Set Attributes 00:30:26.120 ============================ 00:30:26.120 Security Send/Receive: Not Supported 00:30:26.120 Format NVM: Not Supported 00:30:26.120 Firmware Activate/Download: Not Supported 00:30:26.120 Namespace Management: Not Supported 00:30:26.120 Device Self-Test: Not Supported 00:30:26.120 Directives: Not Supported 00:30:26.120 NVMe-MI: Not Supported 00:30:26.120 Virtualization Management: Not Supported 00:30:26.120 Doorbell Buffer Config: Not Supported 00:30:26.120 Get LBA Status Capability: Not Supported 00:30:26.120 Command & Feature Lockdown Capability: Not Supported 00:30:26.120 Abort Command Limit: 1 00:30:26.120 Async Event Request Limit: 1 00:30:26.120 Number of Firmware Slots: N/A 00:30:26.120 Firmware Slot 1 Read-Only: N/A 00:30:26.120 Firmware Activation Without Reset: N/A 00:30:26.120 Multiple Update Detection Support: N/A 00:30:26.120 Firmware Update Granularity: No Information Provided 00:30:26.120 Per-Namespace SMART Log: No 00:30:26.120 Asymmetric Namespace Access Log Page: Not Supported 00:30:26.120 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:26.120 Command Effects Log Page: Not Supported 00:30:26.120 Get Log Page Extended Data: Supported 00:30:26.120 Telemetry Log Pages: Not Supported 00:30:26.120 Persistent Event Log Pages: Not Supported 00:30:26.120 Supported Log Pages Log Page: May Support 00:30:26.120 Commands Supported & Effects Log Page: Not Supported 00:30:26.120 Feature Identifiers & Effects Log Page:May Support 00:30:26.120 NVMe-MI Commands & Effects Log Page: May Support 00:30:26.120 Data Area 4 for Telemetry Log: Not Supported 00:30:26.120 Error Log Page Entries Supported: 1 00:30:26.120 Keep Alive: Not Supported 00:30:26.120 00:30:26.120 NVM Command Set Attributes 00:30:26.120 ========================== 00:30:26.120 Submission Queue Entry Size 00:30:26.120 Max: 1 00:30:26.120 Min: 1 00:30:26.120 Completion Queue Entry Size 00:30:26.120 Max: 1 00:30:26.120 Min: 1 00:30:26.120 Number of Namespaces: 0 00:30:26.120 Compare Command: Not Supported 00:30:26.120 Write Uncorrectable Command: Not Supported 00:30:26.120 Dataset Management Command: Not Supported 00:30:26.120 Write Zeroes Command: Not Supported 00:30:26.120 Set Features Save Field: Not Supported 00:30:26.120 Reservations: Not Supported 00:30:26.120 Timestamp: Not Supported 00:30:26.120 Copy: Not Supported 00:30:26.120 Volatile Write Cache: Not Present 00:30:26.120 Atomic Write Unit (Normal): 1 00:30:26.120 Atomic Write Unit (PFail): 1 00:30:26.120 Atomic Compare & Write Unit: 1 00:30:26.120 Fused Compare & Write: Not Supported 00:30:26.120 Scatter-Gather List 00:30:26.120 SGL Command Set: Supported 00:30:26.120 SGL Keyed: Not Supported 00:30:26.120 SGL Bit Bucket Descriptor: Not Supported 00:30:26.120 SGL Metadata Pointer: Not Supported 00:30:26.120 Oversized SGL: Not Supported 00:30:26.120 SGL Metadata Address: Not Supported 00:30:26.120 SGL Offset: Supported 00:30:26.120 Transport SGL Data Block: Not Supported 00:30:26.120 Replay Protected Memory Block: Not Supported 00:30:26.120 00:30:26.120 Firmware Slot Information 00:30:26.120 ========================= 00:30:26.120 Active slot: 0 00:30:26.120 00:30:26.120 00:30:26.120 Error Log 00:30:26.120 
========= 00:30:26.120 00:30:26.120 Active Namespaces 00:30:26.120 ================= 00:30:26.120 Discovery Log Page 00:30:26.120 ================== 00:30:26.120 Generation Counter: 2 00:30:26.120 Number of Records: 2 00:30:26.120 Record Format: 0 00:30:26.120 00:30:26.120 Discovery Log Entry 0 00:30:26.120 ---------------------- 00:30:26.121 Transport Type: 3 (TCP) 00:30:26.121 Address Family: 1 (IPv4) 00:30:26.121 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:26.121 Entry Flags: 00:30:26.121 Duplicate Returned Information: 0 00:30:26.121 Explicit Persistent Connection Support for Discovery: 0 00:30:26.121 Transport Requirements: 00:30:26.121 Secure Channel: Not Specified 00:30:26.121 Port ID: 1 (0x0001) 00:30:26.121 Controller ID: 65535 (0xffff) 00:30:26.121 Admin Max SQ Size: 32 00:30:26.121 Transport Service Identifier: 4420 00:30:26.121 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:26.121 Transport Address: 10.0.0.1 00:30:26.121 Discovery Log Entry 1 00:30:26.121 ---------------------- 00:30:26.121 Transport Type: 3 (TCP) 00:30:26.121 Address Family: 1 (IPv4) 00:30:26.121 Subsystem Type: 2 (NVM Subsystem) 00:30:26.121 Entry Flags: 00:30:26.121 Duplicate Returned Information: 0 00:30:26.121 Explicit Persistent Connection Support for Discovery: 0 00:30:26.121 Transport Requirements: 00:30:26.121 Secure Channel: Not Specified 00:30:26.121 Port ID: 1 (0x0001) 00:30:26.121 Controller ID: 65535 (0xffff) 00:30:26.121 Admin Max SQ Size: 32 00:30:26.121 Transport Service Identifier: 4420 00:30:26.121 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:26.121 Transport Address: 10.0.0.1 00:30:26.121 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.121 get_feature(0x01) failed 00:30:26.121 get_feature(0x02) failed 00:30:26.121 get_feature(0x04) failed 00:30:26.121 ===================================================== 00:30:26.121 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:26.121 ===================================================== 00:30:26.121 Controller Capabilities/Features 00:30:26.121 ================================ 00:30:26.121 Vendor ID: 0000 00:30:26.121 Subsystem Vendor ID: 0000 00:30:26.121 Serial Number: a5a65fab2379373cfa77 00:30:26.121 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:26.121 Firmware Version: 6.8.9-20 00:30:26.121 Recommended Arb Burst: 6 00:30:26.121 IEEE OUI Identifier: 00 00 00 00:30:26.121 Multi-path I/O 00:30:26.121 May have multiple subsystem ports: Yes 00:30:26.121 May have multiple controllers: Yes 00:30:26.121 Associated with SR-IOV VF: No 00:30:26.121 Max Data Transfer Size: Unlimited 00:30:26.121 Max Number of Namespaces: 1024 00:30:26.121 Max Number of I/O Queues: 128 00:30:26.121 NVMe Specification Version (VS): 1.3 00:30:26.121 NVMe Specification Version (Identify): 1.3 00:30:26.121 Maximum Queue Entries: 1024 00:30:26.121 Contiguous Queues Required: No 00:30:26.121 Arbitration Mechanisms Supported 00:30:26.121 Weighted Round Robin: Not Supported 00:30:26.121 Vendor Specific: Not Supported 00:30:26.121 Reset Timeout: 7500 ms 00:30:26.121 Doorbell Stride: 4 bytes 00:30:26.121 NVM Subsystem Reset: Not Supported 00:30:26.121 Command Sets Supported 00:30:26.121 NVM Command Set: Supported 00:30:26.121 Boot Partition: Not Supported 00:30:26.121 
Memory Page Size Minimum: 4096 bytes 00:30:26.121 Memory Page Size Maximum: 4096 bytes 00:30:26.121 Persistent Memory Region: Not Supported 00:30:26.121 Optional Asynchronous Events Supported 00:30:26.121 Namespace Attribute Notices: Supported 00:30:26.121 Firmware Activation Notices: Not Supported 00:30:26.121 ANA Change Notices: Supported 00:30:26.121 PLE Aggregate Log Change Notices: Not Supported 00:30:26.121 LBA Status Info Alert Notices: Not Supported 00:30:26.121 EGE Aggregate Log Change Notices: Not Supported 00:30:26.121 Normal NVM Subsystem Shutdown event: Not Supported 00:30:26.121 Zone Descriptor Change Notices: Not Supported 00:30:26.121 Discovery Log Change Notices: Not Supported 00:30:26.121 Controller Attributes 00:30:26.121 128-bit Host Identifier: Supported 00:30:26.121 Non-Operational Permissive Mode: Not Supported 00:30:26.121 NVM Sets: Not Supported 00:30:26.121 Read Recovery Levels: Not Supported 00:30:26.121 Endurance Groups: Not Supported 00:30:26.121 Predictable Latency Mode: Not Supported 00:30:26.121 Traffic Based Keep ALive: Supported 00:30:26.121 Namespace Granularity: Not Supported 00:30:26.121 SQ Associations: Not Supported 00:30:26.121 UUID List: Not Supported 00:30:26.121 Multi-Domain Subsystem: Not Supported 00:30:26.121 Fixed Capacity Management: Not Supported 00:30:26.121 Variable Capacity Management: Not Supported 00:30:26.121 Delete Endurance Group: Not Supported 00:30:26.121 Delete NVM Set: Not Supported 00:30:26.121 Extended LBA Formats Supported: Not Supported 00:30:26.121 Flexible Data Placement Supported: Not Supported 00:30:26.121 00:30:26.121 Controller Memory Buffer Support 00:30:26.121 ================================ 00:30:26.121 Supported: No 00:30:26.121 00:30:26.121 Persistent Memory Region Support 00:30:26.121 ================================ 00:30:26.121 Supported: No 00:30:26.121 00:30:26.121 Admin Command Set Attributes 00:30:26.121 ============================ 00:30:26.121 Security Send/Receive: Not Supported 00:30:26.121 Format NVM: Not Supported 00:30:26.121 Firmware Activate/Download: Not Supported 00:30:26.121 Namespace Management: Not Supported 00:30:26.121 Device Self-Test: Not Supported 00:30:26.121 Directives: Not Supported 00:30:26.121 NVMe-MI: Not Supported 00:30:26.121 Virtualization Management: Not Supported 00:30:26.121 Doorbell Buffer Config: Not Supported 00:30:26.121 Get LBA Status Capability: Not Supported 00:30:26.121 Command & Feature Lockdown Capability: Not Supported 00:30:26.121 Abort Command Limit: 4 00:30:26.121 Async Event Request Limit: 4 00:30:26.121 Number of Firmware Slots: N/A 00:30:26.121 Firmware Slot 1 Read-Only: N/A 00:30:26.121 Firmware Activation Without Reset: N/A 00:30:26.121 Multiple Update Detection Support: N/A 00:30:26.121 Firmware Update Granularity: No Information Provided 00:30:26.121 Per-Namespace SMART Log: Yes 00:30:26.121 Asymmetric Namespace Access Log Page: Supported 00:30:26.121 ANA Transition Time : 10 sec 00:30:26.121 00:30:26.121 Asymmetric Namespace Access Capabilities 00:30:26.121 ANA Optimized State : Supported 00:30:26.121 ANA Non-Optimized State : Supported 00:30:26.121 ANA Inaccessible State : Supported 00:30:26.121 ANA Persistent Loss State : Supported 00:30:26.121 ANA Change State : Supported 00:30:26.121 ANAGRPID is not changed : No 00:30:26.121 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:26.121 00:30:26.121 ANA Group Identifier Maximum : 128 00:30:26.121 Number of ANA Group Identifiers : 128 00:30:26.121 Max Number of Allowed Namespaces : 1024 00:30:26.121 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:26.121 Command Effects Log Page: Supported 00:30:26.121 Get Log Page Extended Data: Supported 00:30:26.121 Telemetry Log Pages: Not Supported 00:30:26.121 Persistent Event Log Pages: Not Supported 00:30:26.121 Supported Log Pages Log Page: May Support 00:30:26.121 Commands Supported & Effects Log Page: Not Supported 00:30:26.121 Feature Identifiers & Effects Log Page:May Support 00:30:26.121 NVMe-MI Commands & Effects Log Page: May Support 00:30:26.121 Data Area 4 for Telemetry Log: Not Supported 00:30:26.121 Error Log Page Entries Supported: 128 00:30:26.121 Keep Alive: Supported 00:30:26.121 Keep Alive Granularity: 1000 ms 00:30:26.121 00:30:26.121 NVM Command Set Attributes 00:30:26.121 ========================== 00:30:26.121 Submission Queue Entry Size 00:30:26.121 Max: 64 00:30:26.121 Min: 64 00:30:26.121 Completion Queue Entry Size 00:30:26.121 Max: 16 00:30:26.121 Min: 16 00:30:26.121 Number of Namespaces: 1024 00:30:26.121 Compare Command: Not Supported 00:30:26.121 Write Uncorrectable Command: Not Supported 00:30:26.121 Dataset Management Command: Supported 00:30:26.121 Write Zeroes Command: Supported 00:30:26.121 Set Features Save Field: Not Supported 00:30:26.121 Reservations: Not Supported 00:30:26.121 Timestamp: Not Supported 00:30:26.121 Copy: Not Supported 00:30:26.121 Volatile Write Cache: Present 00:30:26.121 Atomic Write Unit (Normal): 1 00:30:26.121 Atomic Write Unit (PFail): 1 00:30:26.121 Atomic Compare & Write Unit: 1 00:30:26.121 Fused Compare & Write: Not Supported 00:30:26.121 Scatter-Gather List 00:30:26.121 SGL Command Set: Supported 00:30:26.121 SGL Keyed: Not Supported 00:30:26.121 SGL Bit Bucket Descriptor: Not Supported 00:30:26.121 SGL Metadata Pointer: Not Supported 00:30:26.121 Oversized SGL: Not Supported 00:30:26.121 SGL Metadata Address: Not Supported 00:30:26.121 SGL Offset: Supported 00:30:26.121 Transport SGL Data Block: Not Supported 00:30:26.121 Replay Protected Memory Block: Not Supported 00:30:26.121 00:30:26.121 Firmware Slot Information 00:30:26.121 ========================= 00:30:26.121 Active slot: 0 00:30:26.121 00:30:26.121 Asymmetric Namespace Access 00:30:26.121 =========================== 00:30:26.121 Change Count : 0 00:30:26.121 Number of ANA Group Descriptors : 1 00:30:26.121 ANA Group Descriptor : 0 00:30:26.122 ANA Group ID : 1 00:30:26.122 Number of NSID Values : 1 00:30:26.122 Change Count : 0 00:30:26.122 ANA State : 1 00:30:26.122 Namespace Identifier : 1 00:30:26.122 00:30:26.122 Commands Supported and Effects 00:30:26.122 ============================== 00:30:26.122 Admin Commands 00:30:26.122 -------------- 00:30:26.122 Get Log Page (02h): Supported 00:30:26.122 Identify (06h): Supported 00:30:26.122 Abort (08h): Supported 00:30:26.122 Set Features (09h): Supported 00:30:26.122 Get Features (0Ah): Supported 00:30:26.122 Asynchronous Event Request (0Ch): Supported 00:30:26.122 Keep Alive (18h): Supported 00:30:26.122 I/O Commands 00:30:26.122 ------------ 00:30:26.122 Flush (00h): Supported 00:30:26.122 Write (01h): Supported LBA-Change 00:30:26.122 Read (02h): Supported 00:30:26.122 Write Zeroes (08h): Supported LBA-Change 00:30:26.122 Dataset Management (09h): Supported 00:30:26.122 00:30:26.122 Error Log 00:30:26.122 ========= 00:30:26.122 Entry: 0 00:30:26.122 Error Count: 0x3 00:30:26.122 Submission Queue Id: 0x0 00:30:26.122 Command Id: 0x5 00:30:26.122 Phase Bit: 0 00:30:26.122 Status Code: 0x2 00:30:26.122 Status Code Type: 0x0 00:30:26.122 Do Not Retry: 1 00:30:26.122 
Error Location: 0x28 00:30:26.122 LBA: 0x0 00:30:26.122 Namespace: 0x0 00:30:26.122 Vendor Log Page: 0x0 00:30:26.122 ----------- 00:30:26.122 Entry: 1 00:30:26.122 Error Count: 0x2 00:30:26.122 Submission Queue Id: 0x0 00:30:26.122 Command Id: 0x5 00:30:26.122 Phase Bit: 0 00:30:26.122 Status Code: 0x2 00:30:26.122 Status Code Type: 0x0 00:30:26.122 Do Not Retry: 1 00:30:26.122 Error Location: 0x28 00:30:26.122 LBA: 0x0 00:30:26.122 Namespace: 0x0 00:30:26.122 Vendor Log Page: 0x0 00:30:26.122 ----------- 00:30:26.122 Entry: 2 00:30:26.122 Error Count: 0x1 00:30:26.122 Submission Queue Id: 0x0 00:30:26.122 Command Id: 0x4 00:30:26.122 Phase Bit: 0 00:30:26.122 Status Code: 0x2 00:30:26.122 Status Code Type: 0x0 00:30:26.122 Do Not Retry: 1 00:30:26.122 Error Location: 0x28 00:30:26.122 LBA: 0x0 00:30:26.122 Namespace: 0x0 00:30:26.122 Vendor Log Page: 0x0 00:30:26.122 00:30:26.122 Number of Queues 00:30:26.122 ================ 00:30:26.122 Number of I/O Submission Queues: 128 00:30:26.122 Number of I/O Completion Queues: 128 00:30:26.122 00:30:26.122 ZNS Specific Controller Data 00:30:26.122 ============================ 00:30:26.122 Zone Append Size Limit: 0 00:30:26.122 00:30:26.122 00:30:26.122 Active Namespaces 00:30:26.122 ================= 00:30:26.122 get_feature(0x05) failed 00:30:26.122 Namespace ID:1 00:30:26.122 Command Set Identifier: NVM (00h) 00:30:26.122 Deallocate: Supported 00:30:26.122 Deallocated/Unwritten Error: Not Supported 00:30:26.122 Deallocated Read Value: Unknown 00:30:26.122 Deallocate in Write Zeroes: Not Supported 00:30:26.122 Deallocated Guard Field: 0xFFFF 00:30:26.122 Flush: Supported 00:30:26.122 Reservation: Not Supported 00:30:26.122 Namespace Sharing Capabilities: Multiple Controllers 00:30:26.122 Size (in LBAs): 1953525168 (931GiB) 00:30:26.122 Capacity (in LBAs): 1953525168 (931GiB) 00:30:26.122 Utilization (in LBAs): 1953525168 (931GiB) 00:30:26.122 UUID: 1e0a38f8-89bc-4ee6-bcad-9f41f501459c 00:30:26.122 Thin Provisioning: Not Supported 00:30:26.122 Per-NS Atomic Units: Yes 00:30:26.122 Atomic Boundary Size (Normal): 0 00:30:26.122 Atomic Boundary Size (PFail): 0 00:30:26.122 Atomic Boundary Offset: 0 00:30:26.122 NGUID/EUI64 Never Reused: No 00:30:26.122 ANA group ID: 1 00:30:26.122 Namespace Write Protected: No 00:30:26.122 Number of LBA Formats: 1 00:30:26.122 Current LBA Format: LBA Format #00 00:30:26.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:26.122 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.122 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.122 rmmod nvme_tcp 00:30:26.122 rmmod nvme_fabrics 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:26.381 09:51:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.381 09:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.281 09:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.281 09:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:28.281 09:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:28.281 09:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:30:28.281 09:51:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:29.667 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:29.667 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:29.667 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:29.925 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:29.925 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:29.925 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:30:29.925 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:29.925 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:29.925 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:30.859 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:30:30.859 00:30:30.859 real 0m10.672s 00:30:30.859 user 0m2.337s 00:30:30.859 sys 0m4.436s 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.859 ************************************ 00:30:30.859 END TEST nvmf_identify_kernel_target 00:30:30.859 ************************************ 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.859 ************************************ 00:30:30.859 START TEST nvmf_auth_host 00:30:30.859 ************************************ 00:30:30.859 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:31.118 * Looking for test storage... 
00:30:31.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.118 --rc genhtml_branch_coverage=1 00:30:31.118 --rc genhtml_function_coverage=1 00:30:31.118 --rc genhtml_legend=1 00:30:31.118 --rc geninfo_all_blocks=1 00:30:31.118 --rc geninfo_unexecuted_blocks=1 00:30:31.118 00:30:31.118 ' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.118 --rc genhtml_branch_coverage=1 00:30:31.118 --rc genhtml_function_coverage=1 00:30:31.118 --rc genhtml_legend=1 00:30:31.118 --rc geninfo_all_blocks=1 00:30:31.118 --rc geninfo_unexecuted_blocks=1 00:30:31.118 00:30:31.118 ' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.118 --rc genhtml_branch_coverage=1 00:30:31.118 --rc genhtml_function_coverage=1 00:30:31.118 --rc genhtml_legend=1 00:30:31.118 --rc geninfo_all_blocks=1 00:30:31.118 --rc geninfo_unexecuted_blocks=1 00:30:31.118 00:30:31.118 ' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.118 --rc genhtml_branch_coverage=1 00:30:31.118 --rc genhtml_function_coverage=1 00:30:31.118 --rc genhtml_legend=1 00:30:31.118 --rc geninfo_all_blocks=1 00:30:31.118 --rc geninfo_unexecuted_blocks=1 00:30:31.118 00:30:31.118 ' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.118 09:51:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.118 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:31.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.119 09:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.403 09:51:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:34.403 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:34.403 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.403 
09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:34.403 Found net devices under 0000:84:00.0: cvl_0_0 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:34.403 Found net devices under 0000:84:00.1: cvl_0_1 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.403 09:51:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.403 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:30:34.403 00:30:34.403 --- 10.0.0.2 ping statistics --- 00:30:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.404 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:30:34.404 00:30:34.404 --- 10.0.0.1 ping statistics --- 00:30:34.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.404 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1645622 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1645622 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1645622 ']' 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
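The trace above is the NVMe/TCP loopback topology setup: one E810 port (cvl_0_0 in this run) is moved into a private network namespace and addressed as 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as 10.0.0.1; an iptables ACCEPT rule opens TCP port 4420 and a ping in each direction confirms reachability before the target application is started inside the namespace. The following is a condensed, hand-written sketch of the same steps, not the actual nvmf_tcp_init helper; interface names and addresses are the ones reported in this run.

#!/usr/bin/env bash
# Sketch of the namespace/addressing setup traced above (illustrative, not nvmf_tcp_init itself).
set -e
TARGET_IF=cvl_0_0          # NIC port that will live inside the namespace (target side)
INITIATOR_IF=cvl_0_1       # sibling port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in through the root-namespace interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, as in the log.
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

The nvmf_tgt application is then launched under ip netns exec cvl_0_0_ns_spdk with -L nvme_auth, so the bdev_nvme RPCs issued later run against an SPDK process whose network stack is the namespaced E810 port.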
00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.404 09:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cf7721fa3c530468a636c07b0c958f4e 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.32S 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cf7721fa3c530468a636c07b0c958f4e 0 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cf7721fa3c530468a636c07b0c958f4e 0 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cf7721fa3c530468a636c07b0c958f4e 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.32S 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.32S 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.32S 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.404 09:51:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d8c46d6a18a8e02f3da4d07fcfbd8b4b654d0958c4c3f866f722e5b59b2ca857 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ODC 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d8c46d6a18a8e02f3da4d07fcfbd8b4b654d0958c4c3f866f722e5b59b2ca857 3 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d8c46d6a18a8e02f3da4d07fcfbd8b4b654d0958c4c3f866f722e5b59b2ca857 3 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d8c46d6a18a8e02f3da4d07fcfbd8b4b654d0958c4c3f866f722e5b59b2ca857 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:30:34.404 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ODC 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ODC 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ODC 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d1b2eea11a41f6c8caebead5ea3447c9e78382b785789a1b 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.HaQ 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d1b2eea11a41f6c8caebead5ea3447c9e78382b785789a1b 0 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d1b2eea11a41f6c8caebead5ea3447c9e78382b785789a1b 0 
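The blocks above and below this point are repeated invocations of gen_dhchap_key <digest> <len>: each call draws len/2 random bytes with xxd to obtain a hex secret of the requested length, writes it in DH-HMAC-CHAP form to a mktemp'd /tmp/spdk.key-<digest>.XXX file, and the resulting paths are collected into keys[] (host secrets) and ckeys[] (controller secrets). The inline python body of format_key is not visible in the xtrace output; the sketch below assumes the usual DHHC-1 secret representation (base64 of the ASCII secret with a 4-byte little-endian CRC32 appended), which is consistent with the DHHC-1:xx:...: strings that appear later in the log.

# Sketch of the key-generation helper traced here; the python encoding step is an
# assumption (standard DHHC-1 form: base64(secret || crc32(secret))), since the real
# format_key body does not appear in the xtrace output.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # digest ids seen in the trace

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string of exactly $len characters
    file=$(mktemp -t "spdk.key-$digest.XXX")

    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")        # 4-byte checksum appended to the secret
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(secret + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

# Example: gen_dhchap_key_sketch sha512 64 yields a file containing a string of the
# form DHHC-1:03:<base64>: suitable for keyring_file_add_key.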
00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d1b2eea11a41f6c8caebead5ea3447c9e78382b785789a1b 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.HaQ 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.HaQ 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HaQ 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=16f176f86e8ea61cf5093440d153676103658f9f973ffed8 00:30:34.662 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Bgt 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 16f176f86e8ea61cf5093440d153676103658f9f973ffed8 2 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 16f176f86e8ea61cf5093440d153676103658f9f973ffed8 2 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=16f176f86e8ea61cf5093440d153676103658f9f973ffed8 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Bgt 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Bgt 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bgt 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.663 09:51:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e576a7528b86d2d3a208c3f98416fd32 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XTY 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e576a7528b86d2d3a208c3f98416fd32 1 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e576a7528b86d2d3a208c3f98416fd32 1 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e576a7528b86d2d3a208c3f98416fd32 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:30:34.663 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XTY 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XTY 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.XTY 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8bd0865c006c8065db36d10ca434cd52 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.ja0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8bd0865c006c8065db36d10ca434cd52 1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8bd0865c006c8065db36d10ca434cd52 1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=8bd0865c006c8065db36d10ca434cd52 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.ja0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.ja0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ja0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=81aa295f4a0b343d4b24b1f0169fa6e42824fe0218922b9a 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.wLs 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 81aa295f4a0b343d4b24b1f0169fa6e42824fe0218922b9a 2 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 81aa295f4a0b343d4b24b1f0169fa6e42824fe0218922b9a 2 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=81aa295f4a0b343d4b24b1f0169fa6e42824fe0218922b9a 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.wLs 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.wLs 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wLs 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:34.921 09:51:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=140324d7470c4b0532712614badcb0ee 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.2rp 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 140324d7470c4b0532712614badcb0ee 0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 140324d7470c4b0532712614badcb0ee 0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=140324d7470c4b0532712614badcb0ee 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.2rp 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.2rp 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2rp 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:30:34.921 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6c6324ab78e937908b9bea65eae9ca70b42a7868d6b453fa4f3a13c705f3b075 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.NxZ 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6c6324ab78e937908b9bea65eae9ca70b42a7868d6b453fa4f3a13c705f3b075 3 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6c6324ab78e937908b9bea65eae9ca70b42a7868d6b453fa4f3a13c705f3b075 3 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6c6324ab78e937908b9bea65eae9ca70b42a7868d6b453fa4f3a13c705f3b075 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.NxZ 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.NxZ 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NxZ 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:35.179 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1645622 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1645622 ']' 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:35.180 09:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.32S 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ODC ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ODC 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HaQ 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bgt ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Bgt 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XTY 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ja0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ja0 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wLs 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2rp ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2rp 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NxZ 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:35.746 09:51:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:35.746 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:30:35.747 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:35.747 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:30:35.747 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:35.747 09:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:37.120 Waiting for block devices as requested 00:30:37.120 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:30:37.120 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:37.120 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:37.379 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:37.379 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:37.379 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:37.379 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:37.636 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:37.636 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:37.637 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:37.894 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:37.894 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:37.894 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:37.894 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:38.152 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:38.152 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:38.152 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:38.719 No valid GPT data, bailing 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:38.719 09:51:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:30:38.719 00:30:38.719 Discovery Log Number of Records 2, Generation counter 2 00:30:38.719 =====Discovery Log Entry 0====== 00:30:38.719 trtype: tcp 00:30:38.719 adrfam: ipv4 00:30:38.719 subtype: current discovery subsystem 00:30:38.719 treq: not specified, sq flow control disable supported 00:30:38.719 portid: 1 00:30:38.719 trsvcid: 4420 00:30:38.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:38.719 traddr: 10.0.0.1 00:30:38.719 eflags: none 00:30:38.719 sectype: none 00:30:38.719 =====Discovery Log Entry 1====== 00:30:38.719 trtype: tcp 00:30:38.719 adrfam: ipv4 00:30:38.719 subtype: nvme subsystem 00:30:38.719 treq: not specified, sq flow control disable supported 00:30:38.719 portid: 1 00:30:38.719 trsvcid: 4420 00:30:38.719 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:38.719 traddr: 10.0.0.1 00:30:38.719 eflags: none 00:30:38.719 sectype: none 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.719 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.978 nvme0n1 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
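From here on, host/auth.sh cycles through digest, DH group, and key-index combinations, and each combination runs the same connect_authenticate sequence traced above and repeated below: the chosen secret is first installed on the kernel-nvmet side (the echo 'hmac(sha256)', echo ffdhe2048 and echo DHHC-1:... entries, whose redirection targets are hidden by xtrace), then the SPDK application is restricted to that digest/dhgroup via bdev_nvme_set_options and attaches to the kernel target as nqn.2024-02.io.spdk:host0 with bidirectional DH-HMAC-CHAP keys, and success is verified before the controller is detached. A minimal sketch of one iteration follows, calling scripts/rpc.py directly; in the trace this goes through the rpc_cmd wrapper, and the default /var/tmp/spdk.sock RPC socket is assumed.

# One connect_authenticate iteration, sketched with direct rpc.py calls.
# Values (NQNs, address, key names) are the ones seen in this run; rpc_cmd in the
# trace is assumed to forward its arguments to this script over /var/tmp/spdk.sock.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Offer only the digest/dhgroup combination under test.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel nvmet target at 10.0.0.1:4420, authenticating with key1 and
# requiring the controller to authenticate back with ckey1 (keyring names registered
# earlier via keyring_file_add_key).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The combination passes only if the controller actually came up under the expected name,
[[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# and it is torn down again before the next digest/dhgroup/key combination.
$RPC bdev_nvme_detach_controller nvme0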
00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.978 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.236 nvme0n1 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.236 09:51:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.236 09:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.236 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.237 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.495 nvme0n1 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:39.495 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.496 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.754 nvme0n1 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.754 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.013 nvme0n1 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 
00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.013 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 nvme0n1 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.273 09:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.273 09:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:40.273 
09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.273 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 nvme0n1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:40.533 09:51:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.533 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.792 nvme0n1 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:40.792 09:51:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:40.792 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.793 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.081 nvme0n1 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.081 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:41.082 09:51:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.082 09:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.341 nvme0n1 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
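(Worth noting from the keyid=4 iterations above: ckey is empty there, so the [[ -z '' ]] check skips --dhchap-ctrlr-key and the attach requests one-way authentication only. A sketch of that optional-argument pattern, written as an explicit guard equivalent to the ${ckeys[keyid]:+...} expansion traced in host/auth.sh.)

  # pass --dhchap-ctrlr-key only when a controller key exists for this key index
  ckey_arg=()
  [[ -n "${ckeys[$keyid]}" ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey_arg[@]}"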
00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.341 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.599 nvme0n1 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:41.599 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.856 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.113 nvme0n1 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:42.113 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.114 09:51:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.114 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.371 09:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.629 nvme0n1 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
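The cycle traced above repeats once per key for each digest/DH-group combination: bdev_nvme_set_options restricts the host to one DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects to the target at 10.0.0.1:4420 with the key pair under test, bdev_nvme_get_controllers confirms that nvme0 came up (i.e. authentication succeeded), and bdev_nvme_detach_controller tears it down again. A minimal sketch of one such cycle, using only the RPCs visible in the trace (rpc_cmd is the test's JSON-RPC wrapper; the keyring names key1/ckey1 are assumed to have been registered by setup that is not part of this excerpt):

    # One host-side authentication check, as seen in the trace (sketch, not the test script itself)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # keyring entries created before this excerpt
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller exists => auth passed
    rpc_cmd bdev_nvme_detach_controller nvme0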
00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.629 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.886 nvme0n1 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.886 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.142 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
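Before each connection attempt the trace calls nvmet_auth_set_key <digest> <dhgroup> <keyid>, and the xtrace only shows the values being echoed: 'hmac(sha256)', the DH group name, and the DHHC-1 secrets. Where those echoes land is not captured at this trace level; below is a hedged sketch of what such a helper would plausibly write into the kernel nvmet target acting as the test peer (the configfs path and attribute names are assumptions, not taken from this log):

    # Hypothetical sketch only: the trace shows the echoed values, not their destinations.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"      # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"   # e.g. ffdhe4096
        echo "${keys[keyid]}"  > "${hostdir}/dhchap_key"       # DHHC-1:... host secret
        # keyid 4 carries no controller secret in this run, hence the [[ -z '' ]] checks in the trace
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"
    }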
00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:43.143 09:51:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.143 09:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.399 nvme0n1 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.399 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.400 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:43.656 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:43.656 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.657 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.943 nvme0n1 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 
]] 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.943 09:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.530 nvme0n1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.530 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.096 nvme0n1 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.096 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.353 09:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:45.353 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:45.354 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:45.354 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.354 09:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.920 nvme0n1 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:45.920 
09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:45.920 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.921 09:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.487 nvme0n1 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:46.487 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:46.745 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.746 09:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.311 nvme0n1 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.311 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:47.569 09:51:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.569 09:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.503 nvme0n1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.503 09:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.877 nvme0n1 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.877 09:51:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:49.877 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.878 09:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.813 nvme0n1 00:30:50.813 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.813 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.813 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:50.814 09:51:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.814 09:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.749 nvme0n1 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.749 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:52.007 09:51:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.007 09:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.943 nvme0n1 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:52.943 
09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.943 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.202 nvme0n1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@767 -- # local ip 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.202 09:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.463 nvme0n1 00:30:53.463 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.463 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.463 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.463 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.463 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.464 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.465 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.723 nvme0n1 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.723 09:51:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.723 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.724 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.982 nvme0n1 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:53.982 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.240 nvme0n1 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.240 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.241 09:51:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.241 09:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.499 nvme0n1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.499 09:51:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:54.499 09:51:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.499 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.757 nvme0n1 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.757 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.758 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.016 nvme0n1 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.016 09:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.275 nvme0n1 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:55.275 
09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.275 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.534 nvme0n1 00:30:55.534 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.534 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.534 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.534 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.534 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.534 
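The ffdhe3072 passes traced above exercise the host side of DH-HMAC-CHAP with two SPDK RPCs: bdev_nvme_set_options pins the digest and DH group the initiator may negotiate, and bdev_nvme_attach_controller hands over the named secrets. A minimal stand-alone sketch of one connect/verify/detach cycle, assuming rpc_cmd is the usual wrapper around SPDK's scripts/rpc.py and that key3/ckey3 are keyring entries registered earlier in the test (not shown in this excerpt):

  # one connect_authenticate iteration (sha384 / ffdhe3072 / keyid=3), as traced above
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0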
09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.792 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.050 nvme0n1 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.050 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:56.051 09:51:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.051 09:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.618 nvme0n1 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:56.618 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
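The secrets printed throughout the trace use the DHHC-1 textual form. By the usual representation (the one nvme gen-dhchap-key emits), the middle field selects the hash used to transform the retained secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key bytes followed by a CRC-32. A quick way to peek inside one of the host secrets seen above (this layout is an assumption, not confirmed by the log itself):

  secret='DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui:'
  echo "$secret" | cut -d: -f3 | base64 -d | xxd
  # 36 bytes: 32 bytes of key material followed by what is presumably a 4-byte CRC-32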
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.619 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.877 nvme0n1 00:30:56.877 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.877 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.878 09:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.445 nvme0n1 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:30:57.445 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.446 09:51:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.446 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.711 nvme0n1 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
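The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) step traced at auth.sh@58 is the bash ':+' idiom for optional flags: when the controller key for a key index is empty (keyid 4 above), the array stays empty and the subsequent attach simply omits --dhchap-ctrlr-key, which is why the key4 attach calls carry no ckey4 argument. A self-contained illustration of the idiom (the array contents are placeholders, not the test's secrets):

  ckeys=([3]="ckey3" [4]="")   # indexed array; index 4 has no controller key
  for keyid in 3 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-(none)}"
  done
  # keyid=3 extra args: --dhchap-ctrlr-key ckey3
  # keyid=4 extra args: (none)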
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:57.711 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.712 09:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.645 nvme0n1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
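On the target side, nvmet_auth_set_key (auth.sh@42-@51) is traced only as far as its echo statements for the hash name, DH group and DHHC-1 secrets; where those values go is not visible in this excerpt. On a kernel nvmet target they would plausibly be written into the per-host configfs attributes, roughly as sketched below (the configfs paths and redirections are assumptions, not taken from this log):

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host NQN directory
  echo 'hmac(sha384)'           > "$host_dir/dhchap_hash"       # digest, kernel crypto API name
  echo ffdhe6144                > "$host_dir/dhchap_dhgroup"    # DH group for this pass
  echo 'DHHC-1:00:<host key>:'  > "$host_dir/dhchap_key"        # host secret (placeholder shown)
  echo 'DHHC-1:03:<ctrl key>:'  > "$host_dir/dhchap_ctrl_key"   # optional bidirectional secret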
DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.645 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.212 nvme0n1 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.212 09:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.212 09:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.212 09:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.779 nvme0n1 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
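Each attach above is preceded by a full trace of get_main_ns_ip (nvmf/common.sh@767-@781), which maps the transport in use to the matching address variable and dereferences it, always resolving to 10.0.0.1 in this run. A reconstruction from the traced lines (the function wrapper and the exact guard conditions are inferred, since only their expanded forms appear in the log):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                             # traced as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                           # echo 10.0.0.1
  }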
key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:59.779 09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.779 
09:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.346 nvme0n1 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.346 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.604 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.171 nvme0n1 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.171 09:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:01.171 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.172 09:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.545 nvme0n1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.545 09:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.919 nvme0n1 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.919 
09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.919 09:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.853 nvme0n1 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.853 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 09:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.226 nvme0n1 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.226 09:52:00 
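Each pass traced here has the same shape: install the target-side key for the digest/DH-group under test, restrict the host to that same combination, attach the controller with the per-key DH-HMAC-CHAP material, confirm it shows up, then detach before the next combination. Below is a minimal host-side sketch of the keyid=3 pass just traced, issuing the same RPCs through scripts/rpc.py rather than the test's rpc_cmd wrapper (the rpc.py invocation is illustrative, and key3/ckey3 are assumed to be key names registered earlier in the run; neither detail is visible in this excerpt).

#!/usr/bin/env bash
# Host-side sketch of one DH-HMAC-CHAP pass (sha384 / ffdhe8192 / keyid 3).
# scripts/rpc.py stands in for the test's rpc_cmd wrapper; key3/ckey3 are
# assumed to have been registered as keys earlier in the run (not shown here).
set -e
rpc=scripts/rpc.py

# Restrict the host to the digest / DH group combination under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Attach, authenticating with key3 and offering ckey3 so the controller can be
# authenticated in the reverse direction as well.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the controller came up, then tear it down before the next pass.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0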
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:06.226 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:06.227 09:52:00 
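The keyid=4 pass traced immediately above and continued below carries no controller key: ckeys[4] is empty, so the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line expands to an empty array and the attach that follows is issued with --dhchap-key key4 only. A small self-contained illustration of that expansion (the array contents are placeholders):

#!/usr/bin/env bash
# ${ckeys[keyid]:+word} expands to the optional flag pair only when
# ckeys[keyid] is set and non-empty; otherwise it expands to nothing.
ckeys=([3]="DHHC-1:00:placeholder:" [4]="")   # keyid 4: no controller key

for keyid in 3 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${ckey[*]:-<no controller-key args>}"
done
# keyid=3 -> --dhchap-ctrlr-key ckey3
# keyid=4 -> <no controller-key args>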
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.227 09:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 nvme0n1 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.158 09:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.415 nvme0n1 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.415 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.416 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.673 nvme0n1 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:07.673 
09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.673 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.931 nvme0n1 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.931 
09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:07.931 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.932 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.190 nvme0n1 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:08.190 09:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.190 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.449 nvme0n1 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.449 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.450 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 nvme0n1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 
09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:08.708 09:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.708 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.967 nvme0n1 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:08.967 09:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.967 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.225 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.226 nvme0n1 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.226 09:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.226 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.226 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.226 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.226 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.226 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:09.484 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.484 09:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.485 nvme0n1 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.485 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:09.743 
09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.743 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
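Taken together, each pass of the loop traced above reduces to four SPDK JSON-RPC calls on the host side: constrain the allowed digest/DH-group, attach with the key under test, confirm the controller exists, then detach before the next combination. A minimal sketch of one iteration follows, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper; it assumes the kernel nvmet target has already been provisioned with the matching DH-HMAC-CHAP secrets and that key0/ckey0 are key names registered with SPDK beforehand (the NQNs, address, and key names mirror the values seen in this run).

# Restrict the host to a single digest/DH-group combination for this pass (sha512 + ffdhe3072).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach to the target at 10.0.0.1:4420, authenticating with host key "key0" and
# controller key "ckey0" for bidirectional auth (the keyid=4 case above passes only
# --dhchap-key, since its ckey is empty in the trace).
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller came up, then tear it down before the next key/dhgroup pair.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0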
00:31:10.002 nvme0n1 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.002 09:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.002 09:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.261 nvme0n1 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.261 09:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:10.261 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.518 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.519 09:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.519 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.777 nvme0n1 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.034 nvme0n1 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.034 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.035 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:11.292 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:11.293 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:11.293 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.293 09:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.550 nvme0n1 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:11.550 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.551 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.808 nvme0n1 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.808 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.066 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.066 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.067 09:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.067 09:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.634 nvme0n1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:12.634 09:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.634 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.252 nvme0n1 00:31:13.252 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.252 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.253 09:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.817 nvme0n1 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.817 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.074 09:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.640 nvme0n1 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.640 09:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.640 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.207 nvme0n1 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.207 09:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:15.207 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzIxZmEzYzUzMDQ2OGE2MzZjMDdiMGM5NThmNGVcOpui: 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: ]] 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDhjNDZkNmExOGE4ZTAyZjNkYTRkMDdmY2ZiZDhiNGI2NTRkMDk1OGM0YzNmODY2ZjcyMmU1YjU5YjJjYTg1N9EQ/KM=: 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.208 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.466 09:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.400 nvme0n1 00:31:16.400 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.400 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.400 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.400 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.400 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.658 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.659 09:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.592 nvme0n1 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.592 09:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.592 09:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.592 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.593 09:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.965 nvme0n1 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFhYTI5NWY0YTBiMzQzZDRiMjRiMWYwMTY5ZmE2ZTQyODI0ZmUwMjE4OTIyYjlh1T6qsg==: 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQwMzI0ZDc0NzBjNGIwNTMyNzEyNjE0YmFkY2IwZWXAr7Ek: 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:18.965 09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.965 
09:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.898 nvme0n1 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.898 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM2MzI0YWI3OGU5Mzc5MDhiOWJlYTY1ZWFlOWNhNzBiNDJhNzg2OGQ2YjQ1M2ZhNGYzYTEzYzcwNWYzYjA3NXf2/tY=: 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:20.156 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:20.157 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:20.157 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.157 09:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.090 nvme0n1 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.090 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.349 09:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.349 request: 00:31:21.349 { 00:31:21.349 "name": "nvme0", 00:31:21.349 "trtype": "tcp", 00:31:21.349 "traddr": "10.0.0.1", 00:31:21.349 "adrfam": "ipv4", 00:31:21.349 "trsvcid": "4420", 00:31:21.349 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:21.349 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:21.349 "prchk_reftag": false, 00:31:21.349 "prchk_guard": false, 00:31:21.349 "hdgst": false, 00:31:21.349 "ddgst": false, 00:31:21.349 "allow_unrecognized_csi": false, 00:31:21.349 "method": "bdev_nvme_attach_controller", 00:31:21.349 "req_id": 1 00:31:21.349 } 00:31:21.349 Got JSON-RPC error response 00:31:21.349 response: 00:31:21.349 { 00:31:21.349 "code": -5, 00:31:21.349 "message": "Input/output error" 00:31:21.349 } 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:21.349 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.350 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.608 request: 00:31:21.608 { 00:31:21.608 "name": "nvme0", 00:31:21.608 "trtype": "tcp", 00:31:21.608 "traddr": "10.0.0.1", 00:31:21.608 "adrfam": "ipv4", 00:31:21.608 "trsvcid": "4420", 00:31:21.608 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:21.608 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:21.608 "prchk_reftag": false, 00:31:21.608 "prchk_guard": false, 00:31:21.608 "hdgst": false, 00:31:21.608 "ddgst": false, 00:31:21.608 "dhchap_key": "key2", 00:31:21.608 "allow_unrecognized_csi": false, 00:31:21.608 "method": "bdev_nvme_attach_controller", 00:31:21.608 "req_id": 1 00:31:21.608 } 00:31:21.608 Got JSON-RPC error response 00:31:21.608 response: 00:31:21.608 { 00:31:21.608 "code": -5, 00:31:21.608 "message": "Input/output error" 00:31:21.608 } 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
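The surrounding traces are the negative DH-HMAC-CHAP checks in host/auth.sh: the host attempts bdev_nvme_attach_controller first with no --dhchap-key, then with only a non-matching key, then with a mismatched controller key; each attempt is wrapped in the NOT helper, is expected to fail (the target rejects it and the RPC returns JSON-RPC error -5, "Input/output error"), and is followed by bdev_nvme_get_controllers piped through jq length to confirm that no controller was left attached. Below is a minimal sketch of that pattern, reusing the rpc_cmd and NOT helpers from common/autotest_common.sh and the address/NQNs of this run; it is a simplified rendering for orientation, not the test's verbatim code.

# Sketch of the check around host/auth.sh@112-@114 (assumes the target subsystem
# already requires DH-HMAC-CHAP, as configured earlier in this trace).
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no --dhchap-key: the attach must fail
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))      # and the failed attach must leave no controller behind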
00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.608 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.608 request: 00:31:21.608 { 00:31:21.608 "name": "nvme0", 00:31:21.608 "trtype": "tcp", 00:31:21.608 "traddr": "10.0.0.1", 00:31:21.608 "adrfam": "ipv4", 00:31:21.608 "trsvcid": "4420", 00:31:21.608 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:21.608 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:21.608 "prchk_reftag": false, 00:31:21.608 "prchk_guard": false, 00:31:21.608 "hdgst": false, 00:31:21.608 "ddgst": false, 00:31:21.608 "dhchap_key": "key1", 00:31:21.608 "dhchap_ctrlr_key": "ckey2", 00:31:21.608 "allow_unrecognized_csi": false, 00:31:21.608 "method": "bdev_nvme_attach_controller", 00:31:21.608 "req_id": 1 00:31:21.608 } 00:31:21.608 Got JSON-RPC error response 00:31:21.608 response: 00:31:21.608 { 00:31:21.608 "code": -5, 00:31:21.609 "message": "Input/output 
error" 00:31:21.609 } 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.609 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.867 nvme0n1 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.867 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.125 request: 00:31:22.125 { 00:31:22.125 "name": "nvme0", 00:31:22.125 "dhchap_key": "key1", 00:31:22.125 "dhchap_ctrlr_key": "ckey2", 00:31:22.125 "method": "bdev_nvme_set_keys", 00:31:22.125 "req_id": 1 00:31:22.125 } 00:31:22.125 Got JSON-RPC error response 00:31:22.125 response: 00:31:22.125 { 00:31:22.125 "code": -13, 00:31:22.125 "message": "Permission denied" 00:31:22.125 } 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:22.125 09:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:23.058 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:23.059 09:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFiMmVlYTExYTQxZjZjOGNhZWJlYWQ1ZWEzNDQ3YzllNzgzODJiNzg1Nzg5YTFiOe+Z7Q==: 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: ]] 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTZmMTc2Zjg2ZThlYTYxY2Y1MDkzNDQwZDE1MzY3NjEwMzY1OGY5Zjk3M2ZmZWQ4/fQrPQ==: 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 09:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 nvme0n1 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU3NmE3NTI4Yjg2ZDJkM2EyMDhjM2Y5ODQxNmZkMzJELHCL: 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: ]] 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGJkMDg2NWMwMDZjODA2NWRiMzZkMTBjYTQzNGNkNTLSUtYx: 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 request: 00:31:24.432 { 00:31:24.432 "name": "nvme0", 00:31:24.432 "dhchap_key": "key2", 00:31:24.432 "dhchap_ctrlr_key": "ckey1", 00:31:24.432 "method": "bdev_nvme_set_keys", 00:31:24.432 "req_id": 1 00:31:24.432 } 00:31:24.432 Got JSON-RPC error response 00:31:24.432 response: 00:31:24.432 { 00:31:24.432 "code": -13, 00:31:24.432 "message": "Permission denied" 00:31:24.432 } 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:24.432 09:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:25.807 09:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.807 rmmod nvme_tcp 00:31:25.807 rmmod nvme_fabrics 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1645622 ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1645622 ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645622' 00:31:25.807 killing process with pid 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1645622 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:31:25.807 09:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:31:28.340 09:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:29.716 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:29.716 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:29.716 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:30.653 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:30.912 09:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.32S /tmp/spdk.key-null.HaQ /tmp/spdk.key-sha256.XTY /tmp/spdk.key-sha384.wLs /tmp/spdk.key-sha512.NxZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:30.912 09:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:32.288 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:32.288 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:32.288 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
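For reference, the DH-HMAC-CHAP checks above reduce to the following direct scripts/rpc.py calls (a condensed sketch: rpc_cmd in the test is a thin wrapper around rpc.py, and key1/ckey1/key2/ckey2 are assumed to be keyring entries registered earlier in auth.sh from the generated /tmp/spdk.key-* secrets):

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1   # matching pair: attach succeeds, nvme0n1 shows up
    ./scripts/rpc.py bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2   # re-key a live controller: returns 0
    ./scripts/rpc.py bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey1   # mismatched pair: -13 "Permission denied", as logged above

Attach attempts with no key, key2 only, or key1/ckey2 fail earlier in the log with -5 "Input/output error"; those failures are exactly what the NOT/es=1 assertions verify.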
00:31:32.288 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:32.288 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:32.288 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:32.288 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:32.288 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:32.288 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:32.288 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:32.288 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:32.288 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:32.288 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:32.288 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:32.288 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:32.288 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:32.288 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:32.288 00:31:32.288 real 1m1.402s 00:31:32.288 user 0m59.937s 00:31:32.288 sys 0m7.556s 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.288 ************************************ 00:31:32.288 END TEST nvmf_auth_host 00:31:32.288 ************************************ 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:32.288 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.548 ************************************ 00:31:32.548 START TEST nvmf_digest 00:31:32.548 ************************************ 00:31:32.548 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:32.548 * Looking for test storage... 
00:31:32.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.548 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:32.548 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:31:32.548 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.807 --rc genhtml_branch_coverage=1 00:31:32.807 --rc genhtml_function_coverage=1 00:31:32.807 --rc genhtml_legend=1 00:31:32.807 --rc geninfo_all_blocks=1 00:31:32.807 --rc geninfo_unexecuted_blocks=1 00:31:32.807 00:31:32.807 ' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.807 --rc genhtml_branch_coverage=1 00:31:32.807 --rc genhtml_function_coverage=1 00:31:32.807 --rc genhtml_legend=1 00:31:32.807 --rc geninfo_all_blocks=1 00:31:32.807 --rc geninfo_unexecuted_blocks=1 00:31:32.807 00:31:32.807 ' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.807 --rc genhtml_branch_coverage=1 00:31:32.807 --rc genhtml_function_coverage=1 00:31:32.807 --rc genhtml_legend=1 00:31:32.807 --rc geninfo_all_blocks=1 00:31:32.807 --rc geninfo_unexecuted_blocks=1 00:31:32.807 00:31:32.807 ' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:32.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.807 --rc genhtml_branch_coverage=1 00:31:32.807 --rc genhtml_function_coverage=1 00:31:32.807 --rc genhtml_legend=1 00:31:32.807 --rc geninfo_all_blocks=1 00:31:32.807 --rc geninfo_unexecuted_blocks=1 00:31:32.807 00:31:32.807 ' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.807 
09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.807 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:32.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:32.808 09:52:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.808 09:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.336 
09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:35.336 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.336 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:35.336 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:35.337 Found net devices under 0000:84:00.0: cvl_0_0 
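The E810 port discovery above (repeated next for 0000:84:00.1) boils down to matching supported PCI IDs against /sys and reading back the bound net device. A condensed sketch of that logic, with the paths and the single-vendor filter assumed for brevity (the real table in test/nvmf/common.sh also covers x722 and mlx5 IDs):

    intel=0x8086; e810_dev=0x159b
    for pci in /sys/bus/pci/devices/0000:84:00.*; do
        [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810_dev" ]] || continue
        for net in "$pci"/net/*; do
            echo "Found net device under ${pci##*/}: ${net##*/}"   # -> cvl_0_0 / cvl_0_1
        done
    done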
00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:35.337 Found net devices under 0000:84:00.1: cvl_0_1 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.337 09:52:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:31:35.337 00:31:35.337 --- 10.0.0.2 ping statistics --- 00:31:35.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.337 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:35.337 00:31:35.337 --- 10.0.0.1 ping statistics --- 00:31:35.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.337 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:35.337 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.594 ************************************ 00:31:35.594 START TEST nvmf_digest_clean 00:31:35.594 ************************************ 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1656463 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1656463 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1656463 ']' 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.594 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:35.594 [2024-10-07 09:52:30.246103] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:35.594 [2024-10-07 09:52:30.246258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.594 [2024-10-07 09:52:30.350206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.851 [2024-10-07 09:52:30.471049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.851 [2024-10-07 09:52:30.471113] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.851 [2024-10-07 09:52:30.471130] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.851 [2024-10-07 09:52:30.471143] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.851 [2024-10-07 09:52:30.471155] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
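For reference, the nvmf_tcp_init sequence traced a little further up (nvmf/common.sh@250-291) reduces to the commands below; the interface names, addresses and port are copied from this log, and the sketch assumes root and that cvl_0_0/cvl_0_1 are the two net devices discovered above:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator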
00:31:35.851 [2024-10-07 09:52:30.471853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.851 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:36.109 null0 00:31:36.109 [2024-10-07 09:52:30.683562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.109 [2024-10-07 09:52:30.707780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1656488 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1656488 /var/tmp/bperf.sock 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1656488 ']' 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:36.109 09:52:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:36.109 [2024-10-07 09:52:30.770667] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:36.109 [2024-10-07 09:52:30.770768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656488 ] 00:31:36.109 [2024-10-07 09:52:30.844226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.367 [2024-10-07 09:52:30.967798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.367 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:36.367 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:31:36.367 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:36.367 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:36.367 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:36.932 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.932 09:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:37.498 nvme0n1 00:31:37.499 09:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:37.499 09:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:37.756 Running I/O for 2 seconds... 
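While that first randread pass (4 KiB, qd 128) is in flight, note that every run_bperf iteration above and below follows the same four steps; condensed here with paths and arguments taken from the trace (the waitforlisten/cleanup plumbing is omitted):

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 1. start bdevperf idle; --wait-for-rpc defers framework init so accel options could be set first
  $BDEVPERF -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the test waits for the bperf socket to appear before issuing RPCs; omitted here)
  # 2. finish framework init on the bperf app
  $RPC -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the NVMe-oF/TCP controller with data digest enabled
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. kick off the timed workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests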
00:31:39.625 17882.00 IOPS, 69.85 MiB/s 17863.50 IOPS, 69.78 MiB/s 00:31:39.625 Latency(us) 00:31:39.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:39.625 nvme0n1 : 2.01 17878.02 69.84 0.00 0.00 7150.35 3883.61 18738.44 00:31:39.625 =================================================================================================================== 00:31:39.625 Total : 17878.02 69.84 0.00 0.00 7150.35 3883.61 18738.44 00:31:39.625 { 00:31:39.625 "results": [ 00:31:39.625 { 00:31:39.625 "job": "nvme0n1", 00:31:39.625 "core_mask": "0x2", 00:31:39.625 "workload": "randread", 00:31:39.625 "status": "finished", 00:31:39.625 "queue_depth": 128, 00:31:39.625 "io_size": 4096, 00:31:39.625 "runtime": 2.005535, 00:31:39.625 "iops": 17878.02257253052, 00:31:39.625 "mibps": 69.83602567394735, 00:31:39.625 "io_failed": 0, 00:31:39.625 "io_timeout": 0, 00:31:39.625 "avg_latency_us": 7150.354094278912, 00:31:39.625 "min_latency_us": 3883.614814814815, 00:31:39.625 "max_latency_us": 18738.44148148148 00:31:39.625 } 00:31:39.625 ], 00:31:39.625 "core_count": 1 00:31:39.625 } 00:31:39.625 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:39.625 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:39.625 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:39.625 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:39.625 | select(.opcode=="crc32c") 00:31:39.625 | "\(.module_name) \(.executed)"' 00:31:39.625 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:40.191 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:40.191 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:40.191 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:40.191 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1656488 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1656488 ']' 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1656488 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656488 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1656488' 00:31:40.192 killing process with pid 1656488 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1656488 00:31:40.192 Received shutdown signal, test time was about 2.000000 seconds 00:31:40.192 00:31:40.192 Latency(us) 00:31:40.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.192 =================================================================================================================== 00:31:40.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.192 09:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1656488 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1657020 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:40.450 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1657020 /var/tmp/bperf.sock 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1657020 ']' 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:40.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.708 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.708 [2024-10-07 09:52:35.316343] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:40.708 [2024-10-07 09:52:35.316442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657020 ] 00:31:40.708 I/O size of 131072 is greater than zero copy threshold (65536). 
00:31:40.708 Zero copy mechanism will not be used. 00:31:40.708 [2024-10-07 09:52:35.385224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.708 [2024-10-07 09:52:35.504924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.967 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:40.967 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:31:40.967 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:40.967 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:40.967 09:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:41.533 09:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.533 09:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.099 nvme0n1 00:31:42.099 09:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:42.099 09:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:42.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:42.357 Zero copy mechanism will not be used. 00:31:42.357 Running I/O for 2 seconds... 
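While the second pass (128 KiB, qd 16) runs, a quick sanity check on the figures bdevperf reports: the MiB/s column is simply IOPS * io_size / 2^20. Using the 4 KiB randread result above:

  awk 'BEGIN { printf "%.2f\n", 17878.02 * 4096 / (1024 * 1024) }'   # prints 69.84, matching the reported mibps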
00:31:44.225 4472.00 IOPS, 559.00 MiB/s 4319.00 IOPS, 539.88 MiB/s 00:31:44.225 Latency(us) 00:31:44.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.226 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:44.226 nvme0n1 : 2.00 4318.51 539.81 0.00 0.00 3700.78 849.54 11942.12 00:31:44.226 =================================================================================================================== 00:31:44.226 Total : 4318.51 539.81 0.00 0.00 3700.78 849.54 11942.12 00:31:44.226 { 00:31:44.226 "results": [ 00:31:44.226 { 00:31:44.226 "job": "nvme0n1", 00:31:44.226 "core_mask": "0x2", 00:31:44.226 "workload": "randread", 00:31:44.226 "status": "finished", 00:31:44.226 "queue_depth": 16, 00:31:44.226 "io_size": 131072, 00:31:44.226 "runtime": 2.003931, 00:31:44.226 "iops": 4318.511964733317, 00:31:44.226 "mibps": 539.8139955916646, 00:31:44.226 "io_failed": 0, 00:31:44.226 "io_timeout": 0, 00:31:44.226 "avg_latency_us": 3700.7818757329087, 00:31:44.226 "min_latency_us": 849.5407407407407, 00:31:44.226 "max_latency_us": 11942.115555555556 00:31:44.226 } 00:31:44.226 ], 00:31:44.226 "core_count": 1 00:31:44.226 } 00:31:44.226 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:44.226 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:44.226 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:44.226 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:44.226 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:44.226 | select(.opcode=="crc32c") 00:31:44.226 | "\(.module_name) \(.executed)"' 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1657020 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1657020 ']' 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1657020 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657020 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1657020' 00:31:44.792 killing process with pid 1657020 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1657020 00:31:44.792 Received shutdown signal, test time was about 2.000000 seconds 00:31:44.792 00:31:44.792 Latency(us) 00:31:44.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.792 =================================================================================================================== 00:31:44.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:44.792 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1657020 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1657551 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1657551 /var/tmp/bperf.sock 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1657551 ']' 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:45.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:45.050 09:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:45.050 [2024-10-07 09:52:39.709854] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:31:45.050 [2024-10-07 09:52:39.709975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657551 ] 00:31:45.050 [2024-10-07 09:52:39.779778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.308 [2024-10-07 09:52:39.901572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.565 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:45.565 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:31:45.565 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:45.565 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:45.565 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:46.154 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.154 09:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.446 nvme0n1 00:31:46.446 09:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:46.446 09:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:46.446 Running I/O for 2 seconds... 
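Between runs, each bdevperf instance is torn down via the killprocess helper whose trace repeats above ('killing process with pid ...'); its shape, with $pid standing for the bdevperf pid, is roughly:

  killprocess() {
    local pid=$1
    kill -0 "$pid"                        # fail fast if the process is already gone
    [ "$(uname)" = Linux ] && \
      ps --no-headers -o comm= "$pid"     # confirm we are about to kill a reactor, not e.g. sudo
    kill "$pid"                           # SIGTERM; bdevperf then prints the shutdown latency summary
    wait "$pid"
  }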
00:31:48.753 20053.00 IOPS, 78.33 MiB/s 20037.00 IOPS, 78.27 MiB/s 00:31:48.753 Latency(us) 00:31:48.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.753 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.753 nvme0n1 : 2.00 20061.25 78.36 0.00 0.00 6371.98 2949.12 13010.11 00:31:48.753 =================================================================================================================== 00:31:48.753 Total : 20061.25 78.36 0.00 0.00 6371.98 2949.12 13010.11 00:31:48.753 { 00:31:48.753 "results": [ 00:31:48.753 { 00:31:48.753 "job": "nvme0n1", 00:31:48.753 "core_mask": "0x2", 00:31:48.753 "workload": "randwrite", 00:31:48.753 "status": "finished", 00:31:48.753 "queue_depth": 128, 00:31:48.753 "io_size": 4096, 00:31:48.753 "runtime": 2.003963, 00:31:48.753 "iops": 20061.248635828106, 00:31:48.753 "mibps": 78.36425248370354, 00:31:48.753 "io_failed": 0, 00:31:48.753 "io_timeout": 0, 00:31:48.753 "avg_latency_us": 6371.982116091516, 00:31:48.753 "min_latency_us": 2949.12, 00:31:48.753 "max_latency_us": 13010.10962962963 00:31:48.753 } 00:31:48.753 ], 00:31:48.753 "core_count": 1 00:31:48.753 } 00:31:48.753 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:48.753 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:48.753 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:48.753 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:48.753 | select(.opcode=="crc32c") 00:31:48.753 | "\(.module_name) \(.executed)"' 00:31:48.753 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1657551 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1657551 ']' 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1657551 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657551 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1657551' 00:31:49.318 killing process with pid 1657551 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1657551 00:31:49.318 Received shutdown signal, test time was about 2.000000 seconds 00:31:49.318 00:31:49.318 Latency(us) 00:31:49.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.318 =================================================================================================================== 00:31:49.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:49.318 09:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1657551 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1658036 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1658036 /var/tmp/bperf.sock 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1658036 ']' 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:49.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.576 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:49.576 [2024-10-07 09:52:44.272217] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:49.576 [2024-10-07 09:52:44.272316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658036 ] 00:31:49.576 I/O size of 131072 is greater than zero copy threshold (65536). 
00:31:49.576 Zero copy mechanism will not be used. 00:31:49.576 [2024-10-07 09:52:44.341234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.836 [2024-10-07 09:52:44.467247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.836 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.836 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:31:49.836 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:49.836 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:49.836 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:50.403 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.403 09:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.968 nvme0n1 00:31:50.968 09:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:50.969 09:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:50.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:50.969 Zero copy mechanism will not be used. 00:31:50.969 Running I/O for 2 seconds... 
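Once each run completes, the test pulls the accel framework stats out of the bperf app and verifies that the crc32c (digest) work was actually executed, and by the expected module ('software' here, since DSA is disabled with scan_dsa=false). The check traced after every run amounts to:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  read -r acc_module acc_executed < <(
    $RPC -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))               # crc32c must have run at least once
  [[ $acc_module == software ]]        # and on the expected module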
00:31:53.276 4381.00 IOPS, 547.62 MiB/s 4479.50 IOPS, 559.94 MiB/s 00:31:53.276 Latency(us) 00:31:53.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.276 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:53.276 nvme0n1 : 2.01 4476.99 559.62 0.00 0.00 3565.93 2669.99 6893.42 00:31:53.276 =================================================================================================================== 00:31:53.276 Total : 4476.99 559.62 0.00 0.00 3565.93 2669.99 6893.42 00:31:53.276 { 00:31:53.276 "results": [ 00:31:53.276 { 00:31:53.276 "job": "nvme0n1", 00:31:53.276 "core_mask": "0x2", 00:31:53.276 "workload": "randwrite", 00:31:53.276 "status": "finished", 00:31:53.276 "queue_depth": 16, 00:31:53.276 "io_size": 131072, 00:31:53.276 "runtime": 2.00514, 00:31:53.276 "iops": 4476.994125098497, 00:31:53.276 "mibps": 559.6242656373121, 00:31:53.276 "io_failed": 0, 00:31:53.276 "io_timeout": 0, 00:31:53.276 "avg_latency_us": 3565.9289743748423, 00:31:53.276 "min_latency_us": 2669.9851851851854, 00:31:53.276 "max_latency_us": 6893.416296296296 00:31:53.276 } 00:31:53.276 ], 00:31:53.276 "core_count": 1 00:31:53.276 } 00:31:53.276 09:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:53.276 09:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:53.276 09:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:53.276 09:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:53.276 09:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:53.276 | select(.opcode=="crc32c") 00:31:53.276 | "\(.module_name) \(.executed)"' 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1658036 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1658036 ']' 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1658036 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:53.534 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658036 00:31:53.790 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:53.790 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:53.790 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1658036' 00:31:53.790 killing process with pid 1658036 00:31:53.790 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1658036 00:31:53.790 Received shutdown signal, test time was about 2.000000 seconds 00:31:53.790 00:31:53.790 Latency(us) 00:31:53.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.790 =================================================================================================================== 00:31:53.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:53.790 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1658036 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1656463 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1656463 ']' 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1656463 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656463 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1656463' 00:31:54.048 killing process with pid 1656463 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1656463 00:31:54.048 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1656463 00:31:54.306 00:31:54.306 real 0m18.838s 00:31:54.306 user 0m39.347s 00:31:54.306 sys 0m5.300s 00:31:54.306 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:54.306 09:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:54.306 ************************************ 00:31:54.306 END TEST nvmf_digest_clean 00:31:54.306 ************************************ 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.306 ************************************ 00:31:54.306 START TEST nvmf_digest_error 00:31:54.306 ************************************ 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:54.306 09:52:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1658630 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1658630 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1658630 ']' 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:54.306 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.306 [2024-10-07 09:52:49.116510] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:54.306 [2024-10-07 09:52:49.116592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.565 [2024-10-07 09:52:49.186298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.565 [2024-10-07 09:52:49.311185] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.565 [2024-10-07 09:52:49.311258] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.565 [2024-10-07 09:52:49.311275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.565 [2024-10-07 09:52:49.311288] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.565 [2024-10-07 09:52:49.311299] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
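The two app_setup_trace hints above are directly usable while this target is running, e.g.:

  spdk_trace -s nvmf -i 0           # live snapshot of the 0xFFFF tracepoint groups; -i 0 matches the target's -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/    # or save the shm file for offline analysis later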
00:31:54.565 [2024-10-07 09:52:49.312120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.565 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:54.565 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:31:54.565 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:54.565 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:54.565 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.824 [2024-10-07 09:52:49.408809] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.824 null0 00:31:54.824 [2024-10-07 09:52:49.536805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.824 [2024-10-07 09:52:49.561022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1658674 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1658674 /var/tmp/bperf.sock 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1658674 ']' 
00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:54.824 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.824 [2024-10-07 09:52:49.622834] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:31:54.824 [2024-10-07 09:52:49.622940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658674 ] 00:31:55.082 [2024-10-07 09:52:49.697111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.082 [2024-10-07 09:52:49.824172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.340 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.340 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:31:55.340 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.340 09:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.598 09:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:56.532 nvme0n1 00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
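Pulling the digest-error wiring together: the target was started with --wait-for-rpc and had crc32c assigned to the accel 'error' module (accel_assign_opc above), so the RPC sequence for this pass, condensed from the trace, looks like the sketch below (RPC stands for scripts/rpc.py against the target's default socket, BPERF_RPC for the same script with -s /var/tmp/bperf.sock):

  $RPC accel_assign_opc -o crc32c -m error                     # target: route crc32c through the error module
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable           # injection off while the controller attaches
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256    # flags exactly as traced just below
  bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # with corruption armed, the randread completions that follow are expected to fail digest
  # verification: 'data digest error on tqpair' followed by
  # 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' for each affected command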
00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:56.532 09:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:56.791 Running I/O for 2 seconds... 00:31:56.791 [2024-10-07 09:52:51.492796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.492853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.492875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.510709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.510746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.510766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.524775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.524821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.524841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.539825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.539860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.539880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.551972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.552001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.552034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.566970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.566998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.567037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.580996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.581025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.581056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.593570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.593605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.593625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.791 [2024-10-07 09:52:51.607069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:56.791 [2024-10-07 09:52:51.607099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.791 [2024-10-07 09:52:51.607134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.622688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.622724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.622744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.636232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.636266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.636286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.652952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.652981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.671548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.671584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.671604] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.687958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.687987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.688020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.701848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.701882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.701912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.713577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.713612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.713631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.730548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.730583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.730603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.748041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.748070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.748101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.764994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.765024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.765056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.781723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.781758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 
09:52:51.781777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.794447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.794482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.794501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.813132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.813161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.813193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.831038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.831067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.831104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.843155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.050 [2024-10-07 09:52:51.843210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.050 [2024-10-07 09:52:51.843232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.050 [2024-10-07 09:52:51.860329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.051 [2024-10-07 09:52:51.860363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.051 [2024-10-07 09:52:51.860382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.876453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.876488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.876507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.895216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.895251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16711 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.895270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.912745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.912779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.912799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.926992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.927022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.927053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.943586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.943621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.943640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.955969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.955997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.956029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.972058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.972092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.972124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:51.988069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:51.988098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:51.988129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.001149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.001178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:17745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.001209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.018288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.018324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.018344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.036612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.036648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.036667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.048615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.048650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.048669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.065512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.065547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.065567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.082096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.082126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.082159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.094059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.094091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.094125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.309 [2024-10-07 09:52:52.109187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.309 [2024-10-07 09:52:52.109222] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.309 [2024-10-07 09:52:52.109242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.127029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.127061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.127078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.140710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.140744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.140764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.153691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.153726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.168841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.168875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.168902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.184069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.184098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.184131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.197782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.197816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.197834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.215253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.215288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.215307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.227273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.227307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.227338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.244644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.244679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.244699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.257363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.257398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.257417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.271875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.271918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.271951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.289646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.289681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.289700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.303024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.303053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.303085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.315761] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.315796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.315815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.330779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.330814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.330834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.347123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.347152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.347184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.359128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.359160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.359192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.568 [2024-10-07 09:52:52.376489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.568 [2024-10-07 09:52:52.376524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.568 [2024-10-07 09:52:52.376544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.393522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.393564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.393584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.412915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.412959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.412975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:57.827 [2024-10-07 09:52:52.424785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.424819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.424841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.441137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.441166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.441200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.457875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.457938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.457956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 16449.00 IOPS, 64.25 MiB/s [2024-10-07 09:52:52.471870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.471915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.471950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.489087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.489118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.489158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.506234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.506285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.506312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.520699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.520736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.520756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.534362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.534406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.534425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.546962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.546991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.547023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.564659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.564694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.564712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.581021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.581049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.581081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.597237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.597272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.597291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.614563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.614608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.614627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.627077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.627110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.627142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.827 [2024-10-07 09:52:52.640420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:57.827 [2024-10-07 09:52:52.640455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.827 [2024-10-07 09:52:52.640474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.659134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.659162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.659177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.677198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.677245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.677264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.690486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.690521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.690540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.703191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.703237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.703256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.721157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.721186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.721217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.737616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.737650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:437 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.737669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.750217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.750251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.750269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.765611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.765646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.765665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.778850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.778885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.778915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.793255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.793290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.793309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.806390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.806423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.806442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.819112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.819140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.819170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.833744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.833777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:94 nsid:1 lba:16541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.833797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.848304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.848338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.848357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.860535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.860569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.860587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.086 [2024-10-07 09:52:52.876497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.086 [2024-10-07 09:52:52.876530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-10-07 09:52:52.876556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.087 [2024-10-07 09:52:52.895410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.087 [2024-10-07 09:52:52.895445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.087 [2024-10-07 09:52:52.895464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.911550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.927779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.927813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.927833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.940295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.940330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.940349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.958391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.958425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.958444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.970436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.970471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.970490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:52.985970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:52.985998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:52.986030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.345 [2024-10-07 09:52:53.000961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.345 [2024-10-07 09:52:53.000989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.345 [2024-10-07 09:52:53.001020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.014841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.014875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.014901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.032039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.032068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.043863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 
00:31:58.346 [2024-10-07 09:52:53.043908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.043930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.059429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.059465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.059484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.071480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.071525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.071545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.087215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.087243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.087258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.101903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.101951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.101967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.114512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.114548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.114567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.128052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.128082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.128120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.142909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.142954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.142970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.346 [2024-10-07 09:52:53.157678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.346 [2024-10-07 09:52:53.157712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.346 [2024-10-07 09:52:53.157731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.169709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.169743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.169762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.185984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.186012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.186044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.202824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.202858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.202878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.215343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.215378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.215397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.229405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.229439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.229457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.247037] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.247066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.247097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.260450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.260512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.275295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.275329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.275347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.292950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.292981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.293013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.305160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.305210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.305230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.319361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.319396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.319416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.335350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.335386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.335406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.348495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.348531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.348550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.363139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.604 [2024-10-07 09:52:53.363184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.604 [2024-10-07 09:52:53.363205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.604 [2024-10-07 09:52:53.378060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.605 [2024-10-07 09:52:53.378089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.605 [2024-10-07 09:52:53.378120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.605 [2024-10-07 09:52:53.394406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.605 [2024-10-07 09:52:53.394441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.605 [2024-10-07 09:52:53.394460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.605 [2024-10-07 09:52:53.412343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.605 [2024-10-07 09:52:53.412378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.605 [2024-10-07 09:52:53.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.863 [2024-10-07 09:52:53.429991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.863 [2024-10-07 09:52:53.430020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.863 [2024-10-07 09:52:53.430052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.863 [2024-10-07 09:52:53.441758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.863 [2024-10-07 09:52:53.441798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.863 [2024-10-07 09:52:53.441817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.863 [2024-10-07 09:52:53.456268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.863 [2024-10-07 09:52:53.456302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.863 [2024-10-07 09:52:53.456320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.863 16814.50 IOPS, 65.68 MiB/s [2024-10-07 09:52:53.471882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a13f10) 00:31:58.863 [2024-10-07 09:52:53.471951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.863 [2024-10-07 09:52:53.471968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.863 00:31:58.863 Latency(us) 00:31:58.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.863 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:58.863 nvme0n1 : 2.04 16510.76 64.50 0.00 0.00 7593.97 3786.52 50486.99 00:31:58.863 =================================================================================================================== 00:31:58.863 Total : 16510.76 64.50 0.00 0.00 7593.97 3786.52 50486.99 00:31:58.863 { 00:31:58.863 "results": [ 00:31:58.863 { 00:31:58.863 "job": "nvme0n1", 00:31:58.864 "core_mask": "0x2", 00:31:58.864 "workload": "randread", 00:31:58.864 "status": "finished", 00:31:58.864 "queue_depth": 128, 00:31:58.864 "io_size": 4096, 00:31:58.864 "runtime": 2.044545, 00:31:58.864 "iops": 16510.764008618055, 00:31:58.864 "mibps": 64.49517190866428, 00:31:58.864 "io_failed": 0, 00:31:58.864 "io_timeout": 0, 00:31:58.864 "avg_latency_us": 7593.966805984822, 00:31:58.864 "min_latency_us": 3786.5244444444443, 00:31:58.864 "max_latency_us": 50486.99259259259 00:31:58.864 } 00:31:58.864 ], 00:31:58.864 "core_count": 1 00:31:58.864 } 00:31:58.864 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:58.864 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:58.864 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:58.864 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:58.864 | .driver_specific 00:31:58.864 | .nvme_error 00:31:58.864 | .status_code 00:31:58.864 | .command_transient_transport_error' 00:31:59.430 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 )) 00:31:59.430 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1658674 00:31:59.430 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1658674 ']' 00:31:59.430 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1658674 00:31:59.430 09:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # uname 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658674 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658674' 00:31:59.430 killing process with pid 1658674 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1658674 00:31:59.430 Received shutdown signal, test time was about 2.000000 seconds 00:31:59.430 00:31:59.430 Latency(us) 00:31:59.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.430 =================================================================================================================== 00:31:59.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.430 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1658674 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1659212 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1659212 /var/tmp/bperf.sock 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1659212 ']' 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:59.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.688 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:59.688 [2024-10-07 09:52:54.401215] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
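[annotation] The (( 132 > 0 )) check traced above comes from get_transient_errcount in host/digest.sh: it reads the per-bdev NVMe error counters over the bperf RPC socket (enabled earlier via bdev_nvme_set_options --nvme-error-stat) and extracts .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error with jq. A minimal Python sketch of the same query, assuming only the rpc.py path, the /var/tmp/bperf.sock socket and the nvme0n1 bdev name that appear in the trace above:

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bperf.sock"

    def transient_errcount(bdev: str = "nvme0n1") -> int:
        # Same RPC the bash helper wraps: bdev_get_iostat -b <bdev>
        out = subprocess.run(
            [RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev],
            check=True, capture_output=True, text=True,
        ).stdout
        stat = json.loads(out)
        # jq path used by host/digest.sh:
        # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
        return stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
            "command_transient_transport_error"]

    if __name__ == "__main__":
        # The randread run above counted 132 transient transport errors;
        # the test only asserts that the counter is non-zero.
        print(transient_errcount())

[/annotation]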
00:31:59.688 [2024-10-07 09:52:54.401322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659212 ] 00:31:59.688 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:59.688 Zero copy mechanism will not be used. 00:31:59.688 [2024-10-07 09:52:54.478993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.946 [2024-10-07 09:52:54.604908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.946 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.946 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:31:59.946 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:59.946 09:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:00.881 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.139 nvme0n1 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:01.139 09:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:01.139 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:01.139 Zero copy mechanism will not be used. 00:32:01.139 Running I/O for 2 seconds... 
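[annotation] At this point the second bdevperf instance (randread, 131072-byte I/O, queue depth 16) has attached nvme0 with --ddgst, so a data digest is carried on each data PDU, and accel_error_inject_error -o crc32c -t corrupt -i 32 has re-armed CRC32C corruption in the accel layer. Every received data PDU therefore fails its digest comparison in nvme_tcp_accel_seq_recv_compute_crc32_done and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the pattern repeated in the records below. A minimal sketch of the check being defeated, not SPDK's implementation, using a plain bitwise CRC-32C:

    def crc32c(buf: bytes) -> int:
        # Bitwise CRC-32C (Castagnoli, reflected poly 0x82F63B78), the checksum
        # NVMe/TCP uses for its optional data digest (DDGST).
        crc = 0xFFFFFFFF
        for byte in buf:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(payload: bytes, received_ddgst: int, corrupt: bool = False) -> bool:
        # The receive path compares the locally computed digest against the DDGST
        # carried in the PDU. 'corrupt' models the injected CRC corruption with a
        # hypothetical bit flip (the real injection happens inside SPDK's accel
        # error injector), so the comparison can never succeed.
        computed = crc32c(payload)
        if corrupt:
            computed ^= 0x1
        return computed == received_ddgst

    payload = bytes(4096)
    assert crc32c(b"123456789") == 0xE3069283          # CRC-32C check value
    assert data_digest_ok(payload, crc32c(payload))
    assert not data_digest_ok(payload, crc32c(payload), corrupt=True)

[/annotation]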
00:32:01.139 [2024-10-07 09:52:55.935011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.139 [2024-10-07 09:52:55.935068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.139 [2024-10-07 09:52:55.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.139 [2024-10-07 09:52:55.941231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.139 [2024-10-07 09:52:55.941292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.139 [2024-10-07 09:52:55.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.139 [2024-10-07 09:52:55.947522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.139 [2024-10-07 09:52:55.947557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.139 [2024-10-07 09:52:55.947576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.139 [2024-10-07 09:52:55.953830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.139 [2024-10-07 09:52:55.953864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.139 [2024-10-07 09:52:55.953883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.398 [2024-10-07 09:52:55.960023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.398 [2024-10-07 09:52:55.960051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.398 [2024-10-07 09:52:55.960082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.398 [2024-10-07 09:52:55.965986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.398 [2024-10-07 09:52:55.966013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.398 [2024-10-07 09:52:55.966044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.398 [2024-10-07 09:52:55.971863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.398 [2024-10-07 09:52:55.971905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.398 [2024-10-07 09:52:55.971927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.398 [2024-10-07 09:52:55.977773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.398 [2024-10-07 09:52:55.977806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.398 [2024-10-07 09:52:55.977825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.398 [2024-10-07 09:52:55.984015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:55.984043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:55.984073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:55.990006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:55.990034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:55.990064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:55.996464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:55.996498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:55.996516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.003445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.003479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.003498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.010049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.010077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.010108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.016311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.016345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.016364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.022320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.022353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.022372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.028342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.028376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.034324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.034359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.034378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.040301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.040336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.040355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.046940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.046974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.047012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.052276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.052310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.052328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.058106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.058134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:01.399 [2024-10-07 09:52:56.058165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.064029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.064058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.064088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.069861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.069903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.069937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.075720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.075753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.075771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.081781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.081814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.081832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.087551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.087584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.087602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.094020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.094048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.094078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.101109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.101142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.101174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.108389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.108423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.108442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.116090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.116118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.116149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.123062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.123090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.123121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.130739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.130774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.130793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.138538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.138573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.138591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.146345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.146391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.146410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.154162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.399 [2024-10-07 09:52:56.154191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.399 [2024-10-07 09:52:56.154222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.399 [2024-10-07 09:52:56.162096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.162126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.162159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.169977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.170010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.170028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.178057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.178089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.178106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.186981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.187012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.187030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.194277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.194306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.194337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.202261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.400 [2024-10-07 09:52:56.202295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.202314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.400 [2024-10-07 09:52:56.210225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 
00:32:01.400 [2024-10-07 09:52:56.210273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.400 [2024-10-07 09:52:56.210303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.218119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.218149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.218166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.226098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.226127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.226158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.234773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.234818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.234838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.243361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.243396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.243415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.251852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.251887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.251918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.261239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.261284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.261304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.269104] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.269143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.269175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.278005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.278039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.278071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.287018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.287047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.287079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.294942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.294986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.295017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.301546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.301575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.301606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.310078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.310109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.310142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.316217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.316247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.316263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.322402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.322429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.322461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.327834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.327861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.327898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.333248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.333276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.333291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.338629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.338687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.344088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.344115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.344146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.349828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.349855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.349886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.355945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.659 [2024-10-07 09:52:56.355979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.659 [2024-10-07 09:52:56.356020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.659 [2024-10-07 09:52:56.362007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.362036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.362068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.367654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.367692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.367724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.373591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.373618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.373649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.379053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.379080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.379112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.384512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.384539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.384570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.389918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.389946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.389978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.395485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.395512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.395542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.400974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.401002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.401033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.406378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.406411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.406441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.411867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.411922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.417523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.417550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.417580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.423884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.423919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.423950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.430279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.430338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.437385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.437412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:01.660 [2024-10-07 09:52:56.437442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.443650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.443701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.452054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.452098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.452116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.458292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.458331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.458368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.465409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.465438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.465469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.660 [2024-10-07 09:52:56.473974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.660 [2024-10-07 09:52:56.474006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.660 [2024-10-07 09:52:56.474024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.482099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.482129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.482160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.489670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.489699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.489730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.498261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.498291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.498322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.505663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.505691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.505722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.512718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.512747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.512778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.518821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.518849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.525262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.525299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.525331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.531675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.531704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.531734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.538244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.538280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.538311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.544733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.544762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.544793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.550837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.550904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.556514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.556542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.556573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.562149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.562177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.562193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.568316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.568345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.568376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.573677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.573705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.573736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.579251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 
[2024-10-07 09:52:56.579278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.579309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.584931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.584960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.920 [2024-10-07 09:52:56.584991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.920 [2024-10-07 09:52:56.590433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.920 [2024-10-07 09:52:56.590459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.590490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.595958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.595986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.596018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.601426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.601453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.601484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.607080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.607108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.607139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.613302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.613330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.613362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.619004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.619032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.619065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.624545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.624571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.624620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.630246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.630272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.630303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.635691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.635717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.635747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.641115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.641142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.641173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.646600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.646627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.646659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.652114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.652142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.652174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.657681] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.657709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.657739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.663029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.663057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.663088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.668563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.668619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.674316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.674349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.674381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.679968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.679997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.680028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.685718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.685745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.685775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.691430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.691456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.691487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:01.921 [2024-10-07 09:52:56.697137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.697165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.697196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.702657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.702685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.702715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.708093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.708121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.708152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.714275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.714302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.714333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.722016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.722045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.722076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.921 [2024-10-07 09:52:56.729187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:01.921 [2024-10-07 09:52:56.729216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.921 [2024-10-07 09:52:56.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.180 [2024-10-07 09:52:56.735840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.180 [2024-10-07 09:52:56.735887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.180 [2024-10-07 09:52:56.735913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.180 [2024-10-07 09:52:56.741848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.741896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.741915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.748289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.748331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.748347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.756065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.756094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.756126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.763948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.763991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.764008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.771634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.771662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.771693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.779327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.779355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.779386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.786243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.786272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.786311] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.792230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.792258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.792289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.798472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.798500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.798531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.805357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.805399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.805416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.813081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.813112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.813144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.819255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.819284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.819316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.825612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.825641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.825673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.831923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.831952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.831983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.838103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.838131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.838162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.844379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.844407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.844438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.849999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.850042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.850059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.855466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.855493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.855524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.860994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.861022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.181 [2024-10-07 09:52:56.861055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.181 [2024-10-07 09:52:56.866784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.181 [2024-10-07 09:52:56.866812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.866842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.872374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.872401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.872432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.877831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.877858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.877888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.883292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.883319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.883350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.888645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.888673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.888711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.893958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.894018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.899579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.899610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.899641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.905057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.905086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.905118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.910638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.910664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.910694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.916223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.916251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.916283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.921824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.921851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.921866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.927491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.927517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.927548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.182 4820.00 IOPS, 602.50 MiB/s [2024-10-07 09:52:56.935707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.935735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.935766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.941791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.941826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.941858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.947631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.947662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.947696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.953448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.953476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.953508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.959216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.959244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.959276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.964751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.964778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.964810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.970292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.970320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.970351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.182 [2024-10-07 09:52:56.976639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.182 [2024-10-07 09:52:56.976666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.182 [2024-10-07 09:52:56.976697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.183 [2024-10-07 09:52:56.982200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.183 [2024-10-07 09:52:56.982227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.183 [2024-10-07 09:52:56.982257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.183 [2024-10-07 09:52:56.987671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.183 [2024-10-07 09:52:56.987699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.183 [2024-10-07 09:52:56.987730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.183 [2024-10-07 09:52:56.993342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18b6f80) 00:32:02.183 [2024-10-07 09:52:56.993369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.183 [2024-10-07 09:52:56.993400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:56.999438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:56.999465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:56.999496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.005698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.005725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.005756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.011299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.011331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.011362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.016981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.017009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.017039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.022684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.022710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.022740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.028313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.028340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.028370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.033836] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.033900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.039464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.039490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.039530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.045033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.045068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.045098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.050573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.050600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.050630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.055978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.056006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.056038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.061580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.061607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.061637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.067151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.067179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.067194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:02.442 [2024-10-07 09:52:57.070529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.070556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.070587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.077118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.077147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.077179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.083548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.083577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.083608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.090440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.090485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.090517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.097283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.097312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.097343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.442 [2024-10-07 09:52:57.104675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.442 [2024-10-07 09:52:57.104703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.442 [2024-10-07 09:52:57.104735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.111508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.111537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.111568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.118298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.118327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.118359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.125342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.125371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.125402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.132225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.132254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.132285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.139180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.139224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.139240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.145874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.145925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.145943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.152913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.152942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.152974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.160150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.160180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.160217] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.166129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.166160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.166191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.172821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.172849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.172880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.179906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.179937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.179970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.187065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.187097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.187130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.193706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.193734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.193766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.200812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.200842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.200874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.208287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.208325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.208357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.214905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.214935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.214968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.221802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.221832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.221863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.228847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.228901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.228921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.236430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.236504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.244245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.244276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.244309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.443 [2024-10-07 09:52:57.251556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.443 [2024-10-07 09:52:57.251585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.443 [2024-10-07 09:52:57.251616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.444 [2024-10-07 09:52:57.256721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.444 [2024-10-07 09:52:57.256754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:02.444 [2024-10-07 09:52:57.256782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.264321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.264351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.264383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.271914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.271944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.271976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.278154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.278183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.278214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.284017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.284060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.284077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.289956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.289984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.290016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.296084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.296113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.296144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.302227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.302254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.302285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.308766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.308793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.308824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.703 [2024-10-07 09:52:57.314887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.703 [2024-10-07 09:52:57.314943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.703 [2024-10-07 09:52:57.314959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.321293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.321329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.321356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.327481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.327513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.327532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.333944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.333971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.334000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.340266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.340299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.340318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.346594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.346628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.346647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.352606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.352639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.352657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.358990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.359017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.359048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.365297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.365330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.365349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.371221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.371255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.377202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.377242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.377262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.383622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.383655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.383673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.389610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 
00:32:02.704 [2024-10-07 09:52:57.389644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.389662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.395699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.395732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.395750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.401659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.401693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.401711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.407589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.407621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.407639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.413950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.413978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.414009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.419882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.419939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.419955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.426025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.426053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.426084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.432023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.432051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.432082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.438078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.438105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.438137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.704 [2024-10-07 09:52:57.444011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.704 [2024-10-07 09:52:57.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.704 [2024-10-07 09:52:57.444074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.450006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.450035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.450066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.455789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.455822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.455840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.462037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.462065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.462095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.468030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.468057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.468088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.474001] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.474027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.474058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.479983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.480009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.480056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.486132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.486159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.486189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.492183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.492210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.492243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.498102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.498128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.498158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.504075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.504102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.504133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.705 [2024-10-07 09:52:57.510608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.510641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.510660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:02.705 [2024-10-07 09:52:57.518207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.705 [2024-10-07 09:52:57.518255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.705 [2024-10-07 09:52:57.518275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.525677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.525710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.525729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.532392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.532426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.532446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.538686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.538719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.538739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.545180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.545227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.545246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.552969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.552997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.553028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.561642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.561676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.569596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.569630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.569649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.576067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.576095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.576126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.582601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.582635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.582656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.589435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.589468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.589487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.594949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.594976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.595013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.600974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.601002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.601035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.607239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.607266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.607299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.613581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.965 [2024-10-07 09:52:57.613613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.965 [2024-10-07 09:52:57.613631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.965 [2024-10-07 09:52:57.620190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.620217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.620247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.626168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.626211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.626230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.632816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.632850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.632870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.638792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.638825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.638844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.642292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.642324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.642343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.648873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.648936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:02.966 [2024-10-07 09:52:57.648953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.655162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.655211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.655230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.662580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.662614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.662632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.670320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.670355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.670374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.677137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.677165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.677195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.684252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.684298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.684317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.691132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.691160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.691176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.696900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.696945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.696960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.702964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.702993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.703023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.709030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.709057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.709087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.715460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.715493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.715511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.721903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.721949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.721964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.728746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.728800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.736076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.736104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.736135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.743297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.743331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.743350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.751103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.751146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.751182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.758773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.758808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.758836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.766040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.766068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.766106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.772285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.772319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.772338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.966 [2024-10-07 09:52:57.778324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:02.966 [2024-10-07 09:52:57.778357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.966 [2024-10-07 09:52:57.778376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.784868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.784909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.225 [2024-10-07 09:52:57.784944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.792250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.792285] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.225 [2024-10-07 09:52:57.792304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.798961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.798989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.225 [2024-10-07 09:52:57.799021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.805959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.805987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.225 [2024-10-07 09:52:57.806002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.812744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.812778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.225 [2024-10-07 09:52:57.812796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.225 [2024-10-07 09:52:57.819326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.225 [2024-10-07 09:52:57.819360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.819379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.825962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.825995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.826027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.832090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.832117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.832147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.838099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.838127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.838156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.844139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.844166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.844197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.850083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.850109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.850138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.856174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.856201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.856234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.862139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.862165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.862196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.868136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.868163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.868193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.874422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.874456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.874475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.880909] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.880953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.880968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.887065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.887093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.887129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.893137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.893164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.893194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.899223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.899256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.899282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.905318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.905362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.905380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.911410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.911459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.226 [2024-10-07 09:52:57.917400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80) 00:32:03.226 [2024-10-07 09:52:57.917433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.226 [2024-10-07 09:52:57.917451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0
00:32:03.226 [2024-10-07 09:52:57.923387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80)
00:32:03.226 [2024-10-07 09:52:57.923420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.226 [2024-10-07 09:52:57.923438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:03.226 [2024-10-07 09:52:57.929687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b6f80)
00:32:03.226 [2024-10-07 09:52:57.929727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.226 [2024-10-07 09:52:57.929746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:03.226 4839.00 IOPS, 604.88 MiB/s
00:32:03.226 Latency(us)
00:32:03.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:03.226 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:03.226 nvme0n1 : 2.00 4840.01 605.00 0.00 0.00 3301.62 898.09 9077.95
00:32:03.226 ===================================================================================================================
00:32:03.226 Total : 4840.01 605.00 0.00 0.00 3301.62 898.09 9077.95
00:32:03.226 {
00:32:03.226   "results": [
00:32:03.226     {
00:32:03.226       "job": "nvme0n1",
00:32:03.226       "core_mask": "0x2",
00:32:03.226       "workload": "randread",
00:32:03.226       "status": "finished",
00:32:03.226       "queue_depth": 16,
00:32:03.226       "io_size": 131072,
00:32:03.226       "runtime": 2.002889,
00:32:03.226       "iops": 4840.00860756637,
00:32:03.226       "mibps": 605.0010759457963,
00:32:03.226       "io_failed": 0,
00:32:03.226       "io_timeout": 0,
00:32:03.226       "avg_latency_us": 3301.6162470867816,
00:32:03.226       "min_latency_us": 898.085925925926,
00:32:03.226       "max_latency_us": 9077.94962962963
00:32:03.226     }
00:32:03.226   ],
00:32:03.226   "core_count": 1
00:32:03.226 }
00:32:03.226 09:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:03.226 09:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:03.226 09:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:03.226 09:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:03.226 | .driver_specific
00:32:03.226 | .nvme_error
00:32:03.226 | .status_code
00:32:03.226 | .command_transient_transport_error'
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 312 > 0 ))
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1659212
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1659212 ']'
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1659212
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659212
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659212'
00:32:03.791 killing process with pid 1659212
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1659212
00:32:03.791 Received shutdown signal, test time was about 2.000000 seconds
00:32:03.791
00:32:03.791 Latency(us)
00:32:03.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:03.791 ===================================================================================================================
00:32:03.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:03.791 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1659212
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1659744
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1659744 /var/tmp/bperf.sock
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1659744 ']'
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:04.049 09:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:04.049 [2024-10-07 09:52:58.864101] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization...
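The randread pass above is judged by the (( 312 > 0 )) check: get_transient_errcount asks the bdevperf instance over its RPC socket for the nvme0n1 I/O statistics and pulls command_transient_transport_error out of the nvme_error block (available because error statistics were enabled on the controller). Below is a minimal stand-alone sketch of that check; the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter are copied from the trace above, while the count_transient_errors function name and the script framing are illustrative, not the harness's own code.

count_transient_errors() {
    # Query bdevperf's per-bdev I/O statistics and extract the NVMe error
    # counter that the digest test asserts on.
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
    jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
}

errcount=$(count_transient_errors nvme0n1)
(( errcount > 0 ))   # the run above recorded 312 such completions, so the check passes

With that assertion satisfied, the first bdevperf (pid 1659212) is killed and a fresh instance (pid 1659744) is started for the randwrite pass with 4096-byte I/O at queue depth 128, as traced next.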
00:32:04.049 [2024-10-07 09:52:58.864192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659744 ] 00:32:04.308 [2024-10-07 09:52:58.930669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.308 [2024-10-07 09:52:59.052252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.874 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:04.874 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:04.874 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:04.874 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.132 09:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.698 nvme0n1 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:05.698 09:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:05.957 Running I/O for 2 seconds... 
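Before the error stream that follows, the xtrace above wires this randwrite pass up in a handful of RPC steps. The sketch below restates that sequence for readability only; all flags, the 10.0.0.2:4420 address, the subsystem NQN and the bperf socket are copied from the trace, whereas TARGET_SOCK is an assumption (rpc_cmd in the trace does not print which socket it targets, /var/tmp/spdk.sock is simply the usual SPDK default).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock      # bdevperf RPC socket; bdevperf itself was started with
                                    # -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z,
                                    # so it idles until the perform_tests RPC below
TARGET_SOCK=/var/tmp/spdk.sock      # assumption: default RPC socket of the nvmf target app
# Track NVMe errors per status code on the host side and retry transient failures indefinitely.
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any stale crc32c injection in the target's accel layer before arming a new one.
"$SPDK"/scripts/rpc.py -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t disable
# Attach to the target with TCP data digest enabled on the host side (--ddgst).
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm crc32c corruption in the accel layer (-t corrupt -i 256, as traced above) so that
# data-digest verification fails and I/Os complete with COMMAND TRANSIENT TRANSPORT ERROR.
"$SPDK"/scripts/rpc.py -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the 2-second randwrite run; the digest-error/completion pairs below are its output.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests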
00:32:05.957 [2024-10-07 09:53:00.724903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ee5c8 00:32:05.957 [2024-10-07 09:53:00.725995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.957 [2024-10-07 09:53:00.726031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:05.957 [2024-10-07 09:53:00.738959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fbcf0 00:32:05.957 [2024-10-07 09:53:00.740150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.957 [2024-10-07 09:53:00.740198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:05.957 [2024-10-07 09:53:00.751447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3060 00:32:05.957 [2024-10-07 09:53:00.752661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.957 [2024-10-07 09:53:00.752695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:05.957 [2024-10-07 09:53:00.765173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e12d8 00:32:05.957 [2024-10-07 09:53:00.766564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.957 [2024-10-07 09:53:00.766600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.779099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e7c50 00:32:06.215 [2024-10-07 09:53:00.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.780774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.792804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ddc00 00:32:06.215 [2024-10-07 09:53:00.794527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.794561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.806391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f0bc0 00:32:06.215 [2024-10-07 09:53:00.808287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.808320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 
m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.819961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e2c28 00:32:06.215 [2024-10-07 09:53:00.822002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.822030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.829159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df988 00:32:06.215 [2024-10-07 09:53:00.830025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.830054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.842782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f1ca0 00:32:06.215 [2024-10-07 09:53:00.843797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.843831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.855036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fd208 00:32:06.215 [2024-10-07 09:53:00.856060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.856087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.868603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eee38 00:32:06.215 [2024-10-07 09:53:00.869801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.869834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.882202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3060 00:32:06.215 [2024-10-07 09:53:00.883588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.883621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.895721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e7818 00:32:06.215 [2024-10-07 09:53:00.897270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.215 [2024-10-07 09:53:00.897303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:32:06.215 [2024-10-07 09:53:00.909265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fd208 00:32:06.215 [2024-10-07 09:53:00.910976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.911003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.922900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f3e60 00:32:06.216 [2024-10-07 09:53:00.924770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.924804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.936442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e1710 00:32:06.216 [2024-10-07 09:53:00.938509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.945629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f2d80 00:32:06.216 [2024-10-07 09:53:00.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.946561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.957945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fe2e8 00:32:06.216 [2024-10-07 09:53:00.958804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.958835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.971594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e9e10 00:32:06.216 [2024-10-07 09:53:00.972648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.972680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.985126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e27f0 00:32:06.216 [2024-10-07 09:53:00.986367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:00.986400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:00.999187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198de470 00:32:06.216 [2024-10-07 09:53:01.000237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:01.000271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:01.011483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f0bc0 00:32:06.216 [2024-10-07 09:53:01.013277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:01.013310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.216 [2024-10-07 09:53:01.023458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198de470 00:32:06.216 [2024-10-07 09:53:01.024354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.216 [2024-10-07 09:53:01.024387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.037067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f3e60 00:32:06.474 [2024-10-07 09:53:01.038124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.038152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.050008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f4298 00:32:06.474 [2024-10-07 09:53:01.051229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.051261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.064459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198de8a8 00:32:06.474 [2024-10-07 09:53:01.065853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.065885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.079032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fe720 00:32:06.474 [2024-10-07 09:53:01.081051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.081077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.088190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f9b30 00:32:06.474 [2024-10-07 09:53:01.089014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.089041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.101831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6300 00:32:06.474 [2024-10-07 09:53:01.102992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.103018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.115025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e27f0 00:32:06.474 [2024-10-07 09:53:01.116156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.116196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.127660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e88f8 00:32:06.474 [2024-10-07 09:53:01.128818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.474 [2024-10-07 09:53:01.128850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.474 [2024-10-07 09:53:01.141172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e5a90 00:32:06.475 [2024-10-07 09:53:01.142494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.142525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.154739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f3a28 00:32:06.475 [2024-10-07 09:53:01.156251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.156289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.168648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198de8a8 00:32:06.475 [2024-10-07 09:53:01.170133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.170160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.181505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fac10 00:32:06.475 [2024-10-07 09:53:01.183131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.183157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.193650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fbcf0 00:32:06.475 [2024-10-07 09:53:01.194932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.194975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.206613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e1b48 00:32:06.475 [2024-10-07 09:53:01.207795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.207827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.220176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f1430 00:32:06.475 [2024-10-07 09:53:01.221523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.221554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.232476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ed920 00:32:06.475 [2024-10-07 09:53:01.233794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.233826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.245636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f46d0 00:32:06.475 [2024-10-07 09:53:01.247004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.247030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.258472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e5a90 00:32:06.475 [2024-10-07 09:53:01.259277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.259310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.272184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eaef0 00:32:06.475 [2024-10-07 09:53:01.273163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.273210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:06.475 [2024-10-07 09:53:01.284569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ee5c8 00:32:06.475 [2024-10-07 09:53:01.286382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-10-07 09:53:01.286413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.733 [2024-10-07 09:53:01.295997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6890 00:32:06.733 [2024-10-07 09:53:01.296816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.296847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.309520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6cc8 00:32:06.734 [2024-10-07 09:53:01.310494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.310525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.323051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e5a90 00:32:06.734 [2024-10-07 09:53:01.324233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.324264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.336571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f31b8 00:32:06.734 [2024-10-07 09:53:01.337897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.337941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.350163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3d08 00:32:06.734 [2024-10-07 09:53:01.351680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.351712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.360568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e9e10 00:32:06.734 [2024-10-07 09:53:01.361373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.361404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.374106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f2510 00:32:06.734 [2024-10-07 09:53:01.375134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.375160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.387338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6890 00:32:06.734 [2024-10-07 09:53:01.388320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.388352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.400646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f7970 00:32:06.734 [2024-10-07 09:53:01.401296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.401329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.417027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198feb58 00:32:06.734 [2024-10-07 09:53:01.419065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.419091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.426374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f5378 00:32:06.734 [2024-10-07 09:53:01.427362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.439960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e84c0 00:32:06.734 [2024-10-07 09:53:01.441139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.441180] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.453142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fda78 00:32:06.734 [2024-10-07 09:53:01.454337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.454369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.465885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f5be8 00:32:06.734 [2024-10-07 09:53:01.467059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.467085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.479081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3060 00:32:06.734 [2024-10-07 09:53:01.480258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.480289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.491695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df118 00:32:06.734 [2024-10-07 09:53:01.492373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.492410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.507233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fc560 00:32:06.734 [2024-10-07 09:53:01.509105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.509132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.516519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e8d30 00:32:06.734 [2024-10-07 09:53:01.517334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.517366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.530076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f31b8 00:32:06.734 [2024-10-07 09:53:01.531101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 
09:53:01.531128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:06.734 [2024-10-07 09:53:01.543244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f1ca0 00:32:06.734 [2024-10-07 09:53:01.544232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.734 [2024-10-07 09:53:01.544258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.556835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e23b8 00:32:06.993 [2024-10-07 09:53:01.557860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.557901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.569774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e27f0 00:32:06.993 [2024-10-07 09:53:01.570963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.570988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.583025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e4578 00:32:06.993 [2024-10-07 09:53:01.583696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.583727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.599415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f3e60 00:32:06.993 [2024-10-07 09:53:01.601450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.601483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.608569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198efae0 00:32:06.993 [2024-10-07 09:53:01.609460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.609491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.624961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ed0b0 00:32:06.993 [2024-10-07 09:53:01.626856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:06.993 [2024-10-07 09:53:01.626888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.636206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fe720 00:32:06.993 [2024-10-07 09:53:01.637410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.649409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ec840 00:32:06.993 [2024-10-07 09:53:01.650437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.650470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.661632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f4f40 00:32:06.993 [2024-10-07 09:53:01.663398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.663430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.672838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fc998 00:32:06.993 [2024-10-07 09:53:01.673710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.673741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.688655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e88f8 00:32:06.993 [2024-10-07 09:53:01.690066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.993 [2024-10-07 09:53:01.690092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:06.993 [2024-10-07 09:53:01.702147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f81e0 00:32:06.993 [2024-10-07 09:53:01.703715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.703748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.713085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e84c0 00:32:06.994 [2024-10-07 09:53:01.714946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:06.994 [2024-10-07 09:53:01.714975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:06.994 19578.00 IOPS, 76.48 MiB/s [2024-10-07 09:53:01.727617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fd640 00:32:06.994 [2024-10-07 09:53:01.729124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.729151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.741330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df118 00:32:06.994 [2024-10-07 09:53:01.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.743066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.754634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f92c0 00:32:06.994 [2024-10-07 09:53:01.756365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.756397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.767339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6738 00:32:06.994 [2024-10-07 09:53:01.769059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.769086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.781051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e0ea0 00:32:06.994 [2024-10-07 09:53:01.782968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.782996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.794826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f9f68 00:32:06.994 [2024-10-07 09:53:01.796873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.796912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:06.994 [2024-10-07 09:53:01.804018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e49b0 00:32:06.994 [2024-10-07 09:53:01.804860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:13978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.994 [2024-10-07 09:53:01.804899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.818976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6fa8 00:32:07.252 [2024-10-07 09:53:01.820523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.820554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.832104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3060 00:32:07.252 [2024-10-07 09:53:01.833121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.833153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.844812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f5be8 00:32:07.252 [2024-10-07 09:53:01.846150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.846177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.857827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fbcf0 00:32:07.252 [2024-10-07 09:53:01.859193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.859233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.871020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e99d8 00:32:07.252 [2024-10-07 09:53:01.872396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.872428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.886436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f4298 00:32:07.252 [2024-10-07 09:53:01.888492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.888524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.895593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f7100 00:32:07.252 [2024-10-07 09:53:01.896470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:99 nsid:1 lba:25061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.252 [2024-10-07 09:53:01.896501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:07.252 [2024-10-07 09:53:01.909172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e1710 00:32:07.253 [2024-10-07 09:53:01.910212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.910256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.922740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f2948 00:32:07.253 [2024-10-07 09:53:01.924000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.924026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.935051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e12d8 00:32:07.253 [2024-10-07 09:53:01.936275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.936307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.948630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6738 00:32:07.253 [2024-10-07 09:53:01.950042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.950069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.962217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eaef0 00:32:07.253 [2024-10-07 09:53:01.963770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.963803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.975860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e38d0 00:32:07.253 [2024-10-07 09:53:01.977592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.977625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.989419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e2c28 00:32:07.253 [2024-10-07 09:53:01.991322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:20037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.991355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:01.998684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eaab8 00:32:07.253 [2024-10-07 09:53:01.999646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:01.999678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:02.011937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eaef0 00:32:07.253 [2024-10-07 09:53:02.012812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:02.012844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:02.028389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ee190 00:32:07.253 [2024-10-07 09:53:02.030310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:02.030338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:02.042246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fb8b8 00:32:07.253 [2024-10-07 09:53:02.044343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:02.044376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:02.051412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ef6a8 00:32:07.253 [2024-10-07 09:53:02.052305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:02.052338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:07.253 [2024-10-07 09:53:02.064559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f4b08 00:32:07.253 [2024-10-07 09:53:02.065612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.253 [2024-10-07 09:53:02.065644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.078437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f81e0 00:32:07.511 [2024-10-07 09:53:02.079645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.079678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.092221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6cc8 00:32:07.511 [2024-10-07 09:53:02.093606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.093639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.105959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e0630 00:32:07.511 [2024-10-07 09:53:02.107504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.107537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.116302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e0a68 00:32:07.511 [2024-10-07 09:53:02.117133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.129818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f31b8 00:32:07.511 [2024-10-07 09:53:02.130855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.130888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.143370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ed920 00:32:07.511 [2024-10-07 09:53:02.144548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.511 [2024-10-07 09:53:02.144580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:07.511 [2024-10-07 09:53:02.156916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e4578 00:32:07.512 [2024-10-07 09:53:02.158283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.158315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.170636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e4140 00:32:07.512 [2024-10-07 09:53:02.172183] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.184133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f31b8 00:32:07.512 [2024-10-07 09:53:02.185848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.185880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.197709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6890 00:32:07.512 [2024-10-07 09:53:02.199597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.199629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.211389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e12d8 00:32:07.512 [2024-10-07 09:53:02.213456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.213488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.220674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df988 00:32:07.512 [2024-10-07 09:53:02.221721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.221752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.234242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f8a50 00:32:07.512 [2024-10-07 09:53:02.235467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.235500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.247300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f46d0 00:32:07.512 [2024-10-07 09:53:02.248028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.248055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.260861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198eee38 00:32:07.512 [2024-10-07 
09:53:02.261756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.261788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.276234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e9168 00:32:07.512 [2024-10-07 09:53:02.278058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.278101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.285411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df988 00:32:07.512 [2024-10-07 09:53:02.286502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.286539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.299314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ed920 00:32:07.512 [2024-10-07 09:53:02.300561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.300593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.313196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198de038 00:32:07.512 [2024-10-07 09:53:02.314622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.512 [2024-10-07 09:53:02.314655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:07.512 [2024-10-07 09:53:02.327119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e5a90 00:32:07.770 [2024-10-07 09:53:02.328708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.770 [2024-10-07 09:53:02.328740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:07.770 [2024-10-07 09:53:02.340395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198df988 00:32:07.770 [2024-10-07 09:53:02.341999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.770 [2024-10-07 09:53:02.342025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:07.770 [2024-10-07 09:53:02.353074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f8e88 
00:32:07.770 [2024-10-07 09:53:02.354653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.770 [2024-10-07 09:53:02.354686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:07.770 [2024-10-07 09:53:02.366226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f2d80 00:32:07.770 [2024-10-07 09:53:02.367285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.770 [2024-10-07 09:53:02.367318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:07.770 [2024-10-07 09:53:02.378477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ed4e8 00:32:07.770 [2024-10-07 09:53:02.380227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.380258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.389615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fa3a0 00:32:07.771 [2024-10-07 09:53:02.390538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.390570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.405453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f4f40 00:32:07.771 [2024-10-07 09:53:02.406874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.406914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.417795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198feb58 00:32:07.771 [2024-10-07 09:53:02.419201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.419233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.431387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fe2e8 00:32:07.771 [2024-10-07 09:53:02.433006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.433032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.443525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with 
pdu=0x2000198e8088 00:32:07.771 [2024-10-07 09:53:02.444692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.444724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.456437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e2c28 00:32:07.771 [2024-10-07 09:53:02.457539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.457571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.471257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e84c0 00:32:07.771 [2024-10-07 09:53:02.473034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.473060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.484841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6b70 00:32:07.771 [2024-10-07 09:53:02.486763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.486795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.494232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ddc00 00:32:07.771 [2024-10-07 09:53:02.495152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.495178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.507803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f9f68 00:32:07.771 [2024-10-07 09:53:02.508888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.508940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.521410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6738 00:32:07.771 [2024-10-07 09:53:02.522675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.522706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.534447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd52d90) with pdu=0x2000198eea00 00:32:07.771 [2024-10-07 09:53:02.535168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.535194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.547980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e6fa8 00:32:07.771 [2024-10-07 09:53:02.548909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.548952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.561495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fe720 00:32:07.771 [2024-10-07 09:53:02.562577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.562609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.573358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f6cc8 00:32:07.771 [2024-10-07 09:53:02.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.771 [2024-10-07 09:53:02.574610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:07.771 [2024-10-07 09:53:02.586398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f2d80 00:32:08.030 [2024-10-07 09:53:02.587733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.587765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.600130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198f3a28 00:32:08.030 [2024-10-07 09:53:02.601602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.601634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.613321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198dfdc0 00:32:08.030 [2024-10-07 09:53:02.614772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.614803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.628775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd52d90) with pdu=0x2000198e12d8 00:32:08.030 [2024-10-07 09:53:02.630949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.630981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.637997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ec408 00:32:08.030 [2024-10-07 09:53:02.638975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.639001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.651561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198fb048 00:32:08.030 [2024-10-07 09:53:02.652701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.652733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.665154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e2c28 00:32:08.030 [2024-10-07 09:53:02.666457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.666489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.678847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e3060 00:32:08.030 [2024-10-07 09:53:02.680318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.680351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.692410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e01f8 00:32:08.030 [2024-10-07 09:53:02.694047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.030 [2024-10-07 09:53:02.694073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.030 [2024-10-07 09:53:02.704688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198e12d8 00:32:08.030 [2024-10-07 09:53:02.706318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.031 [2024-10-07 09:53:02.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:08.031 [2024-10-07 09:53:02.716361] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52d90) with pdu=0x2000198ef6a8 00:32:08.031 19619.00 IOPS, 76.64 MiB/s [2024-10-07 09:53:02.717128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:08.031 [2024-10-07 09:53:02.717154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.031 00:32:08.031 Latency(us) 00:32:08.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.031 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.031 nvme0n1 : 2.00 19602.43 76.57 0.00 0.00 6519.03 3228.25 17767.54 00:32:08.031 =================================================================================================================== 00:32:08.031 Total : 19602.43 76.57 0.00 0.00 6519.03 3228.25 17767.54 00:32:08.031 { 00:32:08.031 "results": [ 00:32:08.031 { 00:32:08.031 "job": "nvme0n1", 00:32:08.031 "core_mask": "0x2", 00:32:08.031 "workload": "randwrite", 00:32:08.031 "status": "finished", 00:32:08.031 "queue_depth": 128, 00:32:08.031 "io_size": 4096, 00:32:08.031 "runtime": 2.004088, 00:32:08.031 "iops": 19602.432627708964, 00:32:08.031 "mibps": 76.57200245198814, 00:32:08.031 "io_failed": 0, 00:32:08.031 "io_timeout": 0, 00:32:08.031 "avg_latency_us": 6519.032948924997, 00:32:08.031 "min_latency_us": 3228.254814814815, 00:32:08.031 "max_latency_us": 17767.53777777778 00:32:08.031 } 00:32:08.031 ], 00:32:08.031 "core_count": 1 00:32:08.031 } 00:32:08.031 09:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:08.031 09:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:08.031 | .driver_specific 00:32:08.031 | .nvme_error 00:32:08.031 | .status_code 00:32:08.031 | .command_transient_transport_error' 00:32:08.031 09:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:08.031 09:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1659744 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1659744 ']' 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1659744 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659744 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:08.597 09:53:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659744' 00:32:08.597 killing process with pid 1659744 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1659744 00:32:08.597 Received shutdown signal, test time was about 2.000000 seconds 00:32:08.597 00:32:08.597 Latency(us) 00:32:08.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.597 =================================================================================================================== 00:32:08.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:08.597 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1659744 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1660276 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1660276 /var/tmp/bperf.sock 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1660276 ']' 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.855 09:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:09.113 [2024-10-07 09:53:03.707833] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:32:09.113 [2024-10-07 09:53:03.707951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660276 ] 00:32:09.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:09.113 Zero copy mechanism will not be used. 
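
Note that the digest.sh trace above does not scrape the console for these errors; it reads the transient-transport-error counter out of the bdev iostat JSON (enabled earlier with --nvme-error-stat) and requires it to be non-zero before killing the bperf process. A minimal stand-alone sketch of that check, reusing the RPC socket, bdev name and jq filter visible in the trace (the relative script path is illustrative; the run above uses the full Jenkins workspace path):

    # get_transient_errcount as traced in host/digest.sh above: pull the per-bdev
    # NVMe error statistics from bdevperf and extract the transient-transport-error counter.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The run above passes because the counter came back as 154, i.e. (( 154 > 0 )).
    (( $(get_transient_errcount nvme0n1) > 0 ))
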
00:32:09.113 [2024-10-07 09:53:03.780623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.113 [2024-10-07 09:53:03.896817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.371 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.371 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:09.371 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:09.371 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.628 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.192 nvme0n1 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:10.192 09:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.451 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:10.451 Zero copy mechanism will not be used. 00:32:10.451 Running I/O for 2 seconds... 
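
For reference, the setup that produced the run now starting can be condensed into a few commands. This is a sketch of the sequence visible in the trace above, not the test script itself: paths are relative to an SPDK tree (the trace uses the full Jenkins workspace paths), and the two accel_error_inject_error calls go through rpc_cmd, which in this harness presumably targets the nvmf target's default RPC socket rather than bperf.sock.

    # 1. Idle bdevperf instance (-z) on its own RPC socket: 128KiB random writes, qd 16, 2 seconds.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # 2. Keep NVMe error statistics and retry failed I/O indefinitely.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach with data digest enabled (--ddgst) while crc32c error injection is off...
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. ...then corrupt crc32c results (interval 32) so data digest checks start failing,
    #    and drive the workload over RPC.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With digests corrupted at that interval, the writes below complete with COMMAND TRANSIENT TRANSPORT ERROR rather than failing hard; the bdev layer keeps retrying them (io_failed stayed 0 in the previous run's results block), while the error counters accumulate for the check sketched earlier.
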
00:32:10.451 [2024-10-07 09:53:05.035780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.036112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.036148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.042600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.042948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.042978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.049285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.049611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.049645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.055941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.056243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.056286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.062614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.063010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.063051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.069379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.451 [2024-10-07 09:53:05.069699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.451 [2024-10-07 09:53:05.069732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.451 [2024-10-07 09:53:05.076081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.076488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.076521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.083204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.083607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.083640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.090586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.090914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.090948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.097799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.098114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.105030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.105453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.105486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.111953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.112350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.112383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.118677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.119068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.119112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.125359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.125678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.125711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.131954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.132262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.132294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.138422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.138739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.138772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.145067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.145481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.145513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.151858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.152252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.152291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.159543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.159858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.159898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.166005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.166406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.166438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.172725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.173109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.173152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.179359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.179741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.179773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.186278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.186597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.186630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.193402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.193741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.193774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.200052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.200416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.208019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.208384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.208416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.215465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.215766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.215799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.222027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.222348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 
09:53:05.222381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.228529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.228904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.228953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.235125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.235497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.235531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.241757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.242062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.242092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.248157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.248479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.248513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.254402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.254702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.452 [2024-10-07 09:53:05.254735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.452 [2024-10-07 09:53:05.261139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.452 [2024-10-07 09:53:05.261460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.453 [2024-10-07 09:53:05.261492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.267618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.267938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.267971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.274361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.274663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.274696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.281085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.281461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.281494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.288011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.288424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.295435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.295804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.295838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.302400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.302686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.302719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.308971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.309264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.309297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.315224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.315521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.315553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.321689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.321998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.322029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.328498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.328786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.328819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.335069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.335400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.335433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.341594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.341877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.341918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.348097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.348428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.348461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.354595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.354947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.354976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.360806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.361140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.361184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.367291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.367650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.367682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.374031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.374364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.380358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.380719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.380752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.712 [2024-10-07 09:53:05.386702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.712 [2024-10-07 09:53:05.386998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.712 [2024-10-07 09:53:05.387026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.392839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.393149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.393177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.399229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.399583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.399615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.405564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.405940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.405968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.411777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.412057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.412084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.417977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.418326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.418358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.424798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.425077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.425104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.431994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.432338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.432370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.438430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.438777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.438816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.445077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.445372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.445404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.451229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 
[2024-10-07 09:53:05.451514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.451546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.457443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.457729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.457761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.463648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.463938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.463983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.470833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.471118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.478259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.478543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.478575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.486111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.486498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.494296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.494626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.494659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.501480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with 
pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.501774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.501807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.508528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.508815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.508849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.515424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.515711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.515744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.713 [2024-10-07 09:53:05.522234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.713 [2024-10-07 09:53:05.522521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.713 [2024-10-07 09:53:05.522553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.528717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.528997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.529027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.535205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.535518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.535546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.542166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.542471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.548667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.549039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.549068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.554908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.555163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.555211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.561289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.561560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.561593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.567500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.567772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.567804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.574217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.574503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.574536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.580579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.580876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.580918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.587053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.587348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.587380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.593155] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.593450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.593483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.599349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.599643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.599675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.605801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.606087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.606117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.611969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.612258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.612297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.618205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.618485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.618518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.624506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.624778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.624811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.630942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.631217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.631262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
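(Context for the entries above and below: each tcp.c:2233:data_crc32_calc_done "Data digest error" line is followed by the host printing the affected WRITE and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. the injected digest corruption is detected on the data PDU and the command is failed back to the initiator rather than completing successfully. The NVMe/TCP data digest (DDGST) is a CRC-32C over the PDU payload. The following is a minimal, self-contained C sketch of that checksum for illustration only — it is not SPDK's implementation, and the buffer handling around the PDU is assumed/hypothetical:)

/*
 * Sketch only: standard CRC-32C (Castagnoli), the checksum conceptually
 * being verified by the data_crc32_calc_done() errors logged above.
 * Parameters: reflected polynomial 0x82F63B78, init 0xFFFFFFFF, final XOR
 * 0xFFFFFFFF. Not SPDK code; names here are made up for the example.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;                    /* initial value */

    while (len--) {
        crc ^= *p++;                               /* LSB-first (reflected) */
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;                      /* final XOR */
}

int main(void)
{
    /* Well-known CRC-32C check value: "123456789" -> 0xE3069283. */
    const char check[] = "123456789";
    printf("crc32c(\"123456789\") = 0x%08X\n",
           crc32c(check, strlen(check)));

    /* A receiver would compute this over the received data PDU payload and
     * compare it to the PDU's DDGST field; on mismatch the command is failed,
     * which is what the transient transport error completions above show. */
    return 0;
}

(End of illustrative sketch; the test log continues below.)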
00:32:10.973 [2024-10-07 09:53:05.637156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.637487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.973 [2024-10-07 09:53:05.637520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.973 [2024-10-07 09:53:05.643558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.973 [2024-10-07 09:53:05.643839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.643872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.649728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.650019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.650049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.656153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.656448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.656481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.662756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.663070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.663101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.669425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.669708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.669741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.675667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.675970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.676000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.681686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.682022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.682051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.687638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.687917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.687960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.693538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.693806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.693838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.699607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.699876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.699916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.705870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.706131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.706158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.711964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.712208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.712256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.718057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.718433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.718465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.724093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.724380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.724413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.730245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.730637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.736299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.736571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.736604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.742364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.742631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.742664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.748363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.748632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.748664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.754518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.754793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.754825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.760421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.760690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.760722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.766599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.766868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.974 [2024-10-07 09:53:05.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.974 [2024-10-07 09:53:05.773012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.974 [2024-10-07 09:53:05.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.975 [2024-10-07 09:53:05.773319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.975 [2024-10-07 09:53:05.779043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.975 [2024-10-07 09:53:05.779320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.975 [2024-10-07 09:53:05.779353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.975 [2024-10-07 09:53:05.785162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:10.975 [2024-10-07 09:53:05.785454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.975 [2024-10-07 09:53:05.785487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.791180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.791490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.791523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.797384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.797670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.803673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.803973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 
[2024-10-07 09:53:05.804001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.809863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.810121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.816039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.816317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.816350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.822119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.822427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.822460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.828360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.828641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.834591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.834866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.834907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.840571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.840841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.840873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.846769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.847042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.847069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.852855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.853183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.853211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.858941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.859228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.865294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.865569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.865601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.871414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.871687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.871719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.877401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.877669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.877701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.234 [2024-10-07 09:53:05.883513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.234 [2024-10-07 09:53:05.883784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.234 [2024-10-07 09:53:05.883815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.889678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.889969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.889997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.895879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.896181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.896223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.901903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.902165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.902192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.907912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.908171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.908198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.913907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.914171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.914216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.919888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.920161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.920188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.925872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.926156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.926183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.932497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.932769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.932807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.938656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.938931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.938977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.945399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.945672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.945705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.951734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.952012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.952039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.959067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.959349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.959381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.966643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.967031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.967073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.974680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.975074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.975115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.982398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.982775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.982808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.990791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.991095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.991124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:05.998493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:05.998874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:05.998915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:06.006390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.006776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.006809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:06.014651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.014988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.015015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:06.022537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.022823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.022856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:06.029997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.031869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.031909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.235 4712.00 IOPS, 589.00 MiB/s [2024-10-07 09:53:06.040226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with 
pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.040364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.040394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.235 [2024-10-07 09:53:06.048158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.235 [2024-10-07 09:53:06.048259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.235 [2024-10-07 09:53:06.048289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.494 [2024-10-07 09:53:06.055547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.494 [2024-10-07 09:53:06.055631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.494 [2024-10-07 09:53:06.055661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.062452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.062533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.062564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.069673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.069760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.069790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.076904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.076998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.077024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.083409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.083493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.083523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.090094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.090180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.090205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.097003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.097093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.097119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.103695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.103777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.103807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.110489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.110572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.110601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.117082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.117166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.117195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.123876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.124017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.124050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.130569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.130696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.130727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.138292] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.138411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.147218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.147430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.154759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.154870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.154911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.161318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.161409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.161439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.167673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.167777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.167808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.174139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.174274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.174306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.180744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.180825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.180856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.187089] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.187230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.187262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.193865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.194027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.194054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.201053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.201225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.201258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.208029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.208209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.208240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.214846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.214985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.215013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.222154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.222294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.222325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.228738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.228865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.228904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.495 
[2024-10-07 09:53:06.235527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.235667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.235699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.495 [2024-10-07 09:53:06.242300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.495 [2024-10-07 09:53:06.242415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.495 [2024-10-07 09:53:06.242446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.249370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.249467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.249498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.255756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.255898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.255931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.262937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.263030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.263056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.269938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.270071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.270096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.276601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.276725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.276756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:11.496 [2024-10-07 09:53:06.284261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.284387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.284419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.291360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.291491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.291524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.299790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.299965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.299993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.496 [2024-10-07 09:53:06.307774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.496 [2024-10-07 09:53:06.307867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.496 [2024-10-07 09:53:06.307913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.314289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.314373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.314405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.320981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.321091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.321119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.327942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.328041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.328067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.335973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.336066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.336092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.342992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.343081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.343107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.349484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.349563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.349593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.355975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.356066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.356093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.362663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.362745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.362775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.369321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.369407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.369437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.375845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.375961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.375989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.382288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.382369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.382399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.389047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.389138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.389164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.395793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.395875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.395914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.402613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.402693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.402724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.409332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.409413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.409444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.415903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.416009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.755 [2024-10-07 09:53:06.416035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.755 [2024-10-07 09:53:06.422440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.755 [2024-10-07 09:53:06.422521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.422551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.429099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.429186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.429212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.435810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.435898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.435941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.442688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.442767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.442797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.450283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.450361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.450392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.457678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.457765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.457795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.464751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.464835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.471977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.472098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.472126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.479530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.479655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.487229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.487330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.487372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.494281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.494361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.494391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.501715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.501797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.501827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.508945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.509037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.509064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.516275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.516361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.516391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.523372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.523451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 
09:53:06.523482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.530275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.530358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.530389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.537714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.537805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.537835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.544432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.544556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.544588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.552136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.552228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.552268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.559350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.559496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.559524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.756 [2024-10-07 09:53:06.566530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:11.756 [2024-10-07 09:53:06.566685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.756 [2024-10-07 09:53:06.566718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.572968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.573101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:12.015 [2024-10-07 09:53:06.573129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.579639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.579789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.579821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.585658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.585779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.585811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.592119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.592257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.592289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.599022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.599195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.599241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.606251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.606441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.606473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.613593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.613703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.613736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.621557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.621647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.621677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.627871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.627975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.628001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.634132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.634274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.634307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.640204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.640342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.640375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.647286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.647403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.647436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.654064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.654173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.654202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.660832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.015 [2024-10-07 09:53:06.660935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.015 [2024-10-07 09:53:06.660962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.015 [2024-10-07 09:53:06.667343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.667427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.667471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.673565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.673651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.673682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.679970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.680070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.680096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.686696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.686852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.686884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.693343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.693466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.693498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.700089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.700281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.700310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.706689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.706853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.706881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.713416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.713626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.713658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.719853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.720066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.720095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.726105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.726255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.726286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.732362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.732557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.732589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.738767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.738966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.738994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.745079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.745262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.745294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.751744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.751915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.758135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.758319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.758351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.764601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.764808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.764840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.771203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.771378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.771411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.778130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.778282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.778314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.784823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.785046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.785074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.791247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.791369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.791401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.798067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.798213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.798245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.804789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.804956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.804983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.811254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.811418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.811446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.818044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.818266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.818298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.016 [2024-10-07 09:53:06.825148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.016 [2024-10-07 09:53:06.825315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.016 [2024-10-07 09:53:06.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.832901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 09:53:06.833089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.275 [2024-10-07 09:53:06.833118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.840633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 09:53:06.840715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.275 [2024-10-07 09:53:06.840755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.848101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 09:53:06.848234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.275 [2024-10-07 09:53:06.848266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.855533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 
09:53:06.855623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.275 [2024-10-07 09:53:06.855652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.863090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 09:53:06.863233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.275 [2024-10-07 09:53:06.863276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.275 [2024-10-07 09:53:06.870467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.275 [2024-10-07 09:53:06.870547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.870578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.877812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.877905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.877935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.884714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.884802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.884833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.891415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.891495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.891525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.897972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.898063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.898090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.904719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 
00:32:12.276 [2024-10-07 09:53:06.904810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.904840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.911602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.911681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.911711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.918275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.918356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.918386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.924984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.925075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.925100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.931451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.931531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.931561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.938028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.938116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.938142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.944460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.944541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.944571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.951168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) 
with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.951260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.951290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.957752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.957834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.957864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.964600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.964680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.964710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.970990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.971082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.971108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.977174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.977272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.977301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.983618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.983703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.983732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.989903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.990009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.990035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:06.996491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:06.996596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:06.996628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.276 [2024-10-07 09:53:07.003187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.276 [2024-10-07 09:53:07.003281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.276 [2024-10-07 09:53:07.003312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.277 [2024-10-07 09:53:07.009989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.277 [2024-10-07 09:53:07.010078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.277 [2024-10-07 09:53:07.010104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:12.277 [2024-10-07 09:53:07.016773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.277 [2024-10-07 09:53:07.016855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.277 [2024-10-07 09:53:07.016899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:12.277 [2024-10-07 09:53:07.023279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.277 [2024-10-07 09:53:07.023363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.277 [2024-10-07 09:53:07.023393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:12.277 [2024-10-07 09:53:07.029769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd52f80) with pdu=0x2000198fef90 00:32:12.277 [2024-10-07 09:53:07.029853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.277 [2024-10-07 09:53:07.029883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:12.277 4608.50 IOPS, 576.06 MiB/s 00:32:12.277 Latency(us) 00:32:12.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.277 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:12.277 nvme0n1 : 2.01 4607.20 575.90 0.00 0.00 3464.78 1953.94 9709.04 00:32:12.277 =================================================================================================================== 00:32:12.277 Total : 4607.20 575.90 0.00 0.00 3464.78 1953.94 9709.04 00:32:12.277 { 00:32:12.277 "results": [ 00:32:12.277 { 00:32:12.277 "job": 
"nvme0n1", 00:32:12.277 "core_mask": "0x2", 00:32:12.277 "workload": "randwrite", 00:32:12.277 "status": "finished", 00:32:12.277 "queue_depth": 16, 00:32:12.277 "io_size": 131072, 00:32:12.277 "runtime": 2.005123, 00:32:12.277 "iops": 4607.198660630795, 00:32:12.277 "mibps": 575.8998325788493, 00:32:12.277 "io_failed": 0, 00:32:12.277 "io_timeout": 0, 00:32:12.277 "avg_latency_us": 3464.7838464314063, 00:32:12.277 "min_latency_us": 1953.9437037037037, 00:32:12.277 "max_latency_us": 9709.037037037036 00:32:12.277 } 00:32:12.277 ], 00:32:12.277 "core_count": 1 00:32:12.277 } 00:32:12.277 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:12.277 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:12.277 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:12.277 | .driver_specific 00:32:12.277 | .nvme_error 00:32:12.277 | .status_code 00:32:12.277 | .command_transient_transport_error' 00:32:12.277 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 297 > 0 )) 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1660276 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1660276 ']' 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1660276 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1660276 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1660276' 00:32:12.843 killing process with pid 1660276 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1660276 00:32:12.843 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.843 00:32:12.843 Latency(us) 00:32:12.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.843 =================================================================================================================== 00:32:12.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.843 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1660276 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1658630 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1658630 ']' 00:32:13.101 09:53:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1658630 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658630 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658630' 00:32:13.101 killing process with pid 1658630 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1658630 00:32:13.101 09:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1658630 00:32:13.360 00:32:13.360 real 0m19.008s 00:32:13.360 user 0m40.128s 00:32:13.360 sys 0m5.403s 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:13.360 ************************************ 00:32:13.360 END TEST nvmf_digest_error 00:32:13.360 ************************************ 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.360 rmmod nvme_tcp 00:32:13.360 rmmod nvme_fabrics 00:32:13.360 rmmod nvme_keyring 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1658630 ']' 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1658630 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1658630 ']' 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1658630 00:32:13.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1658630) - No such process 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1658630 is not found' 00:32:13.360 Process with pid 1658630 is not found 
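The transient-error check traced just above reduces to one RPC call and one jq filter: bdev_get_iostat is queried over bdevperf's RPC socket and the command_transient_transport_error counter for nvme0n1 must be non-zero (297 in this run, matching the injected digest errors). A minimal standalone sketch of that check, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1; it mirrors the traced commands rather than reproducing host/digest.sh itself:

# Query per-bdev iostat over the bdevperf RPC socket and pull out the
# transient transport error counter that the digest-error test asserts on.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # passes only if at least one transient transport error was recorded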
00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:13.360 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:32:13.622 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.622 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.622 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.622 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.622 09:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.553 00:32:15.553 real 0m43.100s 00:32:15.553 user 1m20.598s 00:32:15.553 sys 0m12.828s 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.553 ************************************ 00:32:15.553 END TEST nvmf_digest 00:32:15.553 ************************************ 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.553 ************************************ 00:32:15.553 START TEST nvmf_bdevperf 00:32:15.553 ************************************ 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:15.553 * Looking for test storage... 
00:32:15.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:32:15.553 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.813 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:15.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.814 --rc genhtml_branch_coverage=1 00:32:15.814 --rc genhtml_function_coverage=1 00:32:15.814 --rc genhtml_legend=1 00:32:15.814 --rc geninfo_all_blocks=1 00:32:15.814 --rc geninfo_unexecuted_blocks=1 00:32:15.814 00:32:15.814 ' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:15.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.814 --rc genhtml_branch_coverage=1 00:32:15.814 --rc genhtml_function_coverage=1 00:32:15.814 --rc genhtml_legend=1 00:32:15.814 --rc geninfo_all_blocks=1 00:32:15.814 --rc geninfo_unexecuted_blocks=1 00:32:15.814 00:32:15.814 ' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:15.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.814 --rc genhtml_branch_coverage=1 00:32:15.814 --rc genhtml_function_coverage=1 00:32:15.814 --rc genhtml_legend=1 00:32:15.814 --rc geninfo_all_blocks=1 00:32:15.814 --rc geninfo_unexecuted_blocks=1 00:32:15.814 00:32:15.814 ' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:15.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.814 --rc genhtml_branch_coverage=1 00:32:15.814 --rc genhtml_function_coverage=1 00:32:15.814 --rc genhtml_legend=1 00:32:15.814 --rc geninfo_all_blocks=1 00:32:15.814 --rc geninfo_unexecuted_blocks=1 00:32:15.814 00:32:15.814 ' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.814 09:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:18.351 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.351 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:18.352 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:18.352 Found net devices under 0000:84:00.0: cvl_0_0 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:18.352 Found net devices under 0000:84:00.1: cvl_0_1 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.352 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:32:18.612 00:32:18.612 --- 10.0.0.2 ping statistics --- 00:32:18.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.612 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:32:18.612 00:32:18.612 --- 10.0.0.1 ping statistics --- 00:32:18.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.612 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1662775 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1662775 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1662775 ']' 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:18.612 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:18.612 [2024-10-07 09:53:13.382346] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:32:18.612 [2024-10-07 09:53:13.382460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.871 [2024-10-07 09:53:13.488181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:18.871 [2024-10-07 09:53:13.659755] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.871 [2024-10-07 09:53:13.659856] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.871 [2024-10-07 09:53:13.659907] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.871 [2024-10-07 09:53:13.659940] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.871 [2024-10-07 09:53:13.659979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.871 [2024-10-07 09:53:13.661634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.871 [2024-10-07 09:53:13.661690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.871 [2024-10-07 09:53:13.661694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 [2024-10-07 09:53:13.818007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 Malloc0 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:19.130 [2024-10-07 09:53:13.885328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:19.130 { 00:32:19.130 "params": { 00:32:19.130 "name": "Nvme$subsystem", 00:32:19.130 "trtype": "$TEST_TRANSPORT", 00:32:19.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.130 "adrfam": "ipv4", 00:32:19.130 "trsvcid": "$NVMF_PORT", 00:32:19.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.130 "hdgst": ${hdgst:-false}, 00:32:19.130 "ddgst": ${ddgst:-false} 00:32:19.130 }, 00:32:19.130 "method": "bdev_nvme_attach_controller" 00:32:19.130 } 00:32:19.130 EOF 00:32:19.130 )") 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:32:19.130 09:53:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:19.130 "params": { 00:32:19.130 "name": "Nvme1", 00:32:19.130 "trtype": "tcp", 00:32:19.130 "traddr": "10.0.0.2", 00:32:19.130 "adrfam": "ipv4", 00:32:19.130 "trsvcid": "4420", 00:32:19.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:19.130 "hdgst": false, 00:32:19.130 "ddgst": false 00:32:19.130 }, 00:32:19.130 "method": "bdev_nvme_attach_controller" 00:32:19.130 }' 00:32:19.388 [2024-10-07 09:53:13.960624] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:32:19.388 [2024-10-07 09:53:13.960745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662858 ] 00:32:19.388 [2024-10-07 09:53:14.053900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.388 [2024-10-07 09:53:14.167911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.955 Running I/O for 1 seconds... 00:32:20.889 8523.00 IOPS, 33.29 MiB/s 00:32:20.889 Latency(us) 00:32:20.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:20.889 Verification LBA range: start 0x0 length 0x4000 00:32:20.889 Nvme1n1 : 1.01 8604.71 33.61 0.00 0.00 14813.54 1401.74 14175.19 00:32:20.889 =================================================================================================================== 00:32:20.889 Total : 8604.71 33.61 0.00 0.00 14813.54 1401.74 14175.19 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1663067 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:21.146 { 00:32:21.146 "params": { 00:32:21.146 "name": "Nvme$subsystem", 00:32:21.146 "trtype": "$TEST_TRANSPORT", 00:32:21.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.146 "adrfam": "ipv4", 00:32:21.146 "trsvcid": "$NVMF_PORT", 00:32:21.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.146 "hdgst": ${hdgst:-false}, 00:32:21.146 "ddgst": ${ddgst:-false} 00:32:21.146 }, 00:32:21.146 "method": "bdev_nvme_attach_controller" 00:32:21.146 } 00:32:21.146 EOF 00:32:21.146 )") 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:32:21.146 09:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:21.146 "params": { 00:32:21.146 "name": "Nvme1", 00:32:21.146 "trtype": "tcp", 00:32:21.146 "traddr": "10.0.0.2", 00:32:21.146 "adrfam": "ipv4", 00:32:21.146 "trsvcid": "4420", 00:32:21.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.146 "hdgst": false, 00:32:21.146 "ddgst": false 00:32:21.146 }, 00:32:21.146 "method": "bdev_nvme_attach_controller" 00:32:21.146 }' 00:32:21.146 [2024-10-07 09:53:15.896494] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
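For reference, the 15-second run traced above is driven by feeding bdevperf the JSON printed just before it: a single bdev_nvme_attach_controller entry aimed at the target's 10.0.0.2:4420 TCP listener, with both digests disabled, at queue depth 128 and 4096-byte verify I/O. A hedged sketch of an equivalent manual invocation follows; the params object is copied verbatim from the trace, while the surrounding subsystems/bdev wrapper and the /tmp/bperf.json path are assumptions, since gen_nvmf_target_json's full output is not shown in this log:

# Assumed minimal wrapper; only the inner "method"/"params" object comes from the trace above.
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced invocation: queue depth 128, 4096-byte I/O, verify workload, 15 s.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f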
00:32:21.146 [2024-10-07 09:53:15.896676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663067 ] 00:32:21.405 [2024-10-07 09:53:15.999624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.405 [2024-10-07 09:53:16.112017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.662 Running I/O for 15 seconds... 00:32:24.231 8475.00 IOPS, 33.11 MiB/s 8655.50 IOPS, 33.81 MiB/s 09:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1662775 00:32:24.231 09:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:24.231 [2024-10-07 09:53:18.827368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.231 [2024-10-07 09:53:18.827738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.231 [2024-10-07 09:53:18.827758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.827778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.827796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.827816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.827836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.827853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.827871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.827907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.827955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.827974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.827989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 
[2024-10-07 09:53:18.828140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:110 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.232 [2024-10-07 09:53:18.828904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.828975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.828990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.232 [2024-10-07 09:53:18.829215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.232 [2024-10-07 09:53:18.829230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34872 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 
09:53:18.829591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.829971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.829984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.233 [2024-10-07 09:53:18.830014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.233 [2024-10-07 09:53:18.830565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.233 [2024-10-07 09:53:18.830585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.830954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 
[2024-10-07 09:53:18.830983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.830997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.234 [2024-10-07 09:53:18.831690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.831859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.831876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.832118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.832139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.234 [2024-10-07 09:53:18.832154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.234 [2024-10-07 09:53:18.832197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906ab0 is same with the state(6) to be set 00:32:24.234 [2024-10-07 09:53:18.832217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:24.234 [2024-10-07 09:53:18.832230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:24.235 [2024-10-07 09:53:18.832244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34536 len:8 PRP1 0x0 PRP2 0x0 00:32:24.235 [2024-10-07 09:53:18.832258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.235 [2024-10-07 09:53:18.832328] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x906ab0 was disconnected and freed. reset controller. 00:32:24.235 [2024-10-07 09:53:18.835925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.836009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.836718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.836772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.836791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.837055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.837307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.837332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.837350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.840912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.235 [2024-10-07 09:53:18.850228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.850742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.850793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.850812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.851061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.851304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.851327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.851343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.854924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
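Every command in the dump above completes with status (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification names Command Aborted due to SQ Deletion. That is the expected way for outstanding READ/WRITE commands to drain once the TCP connection to the just-killed target drops and the queue pair is torn down, which is what the qpair 0x906ab0 "disconnected and freed" record confirms. Below is a minimal illustrative decoder for that one status pair in plain C; it is not SPDK's own spdk_nvme_print_completion, just the SCT/SC split that the "(00/08)" notation encodes.

    /* status_decode.c - illustrative decode of the "(00/08)" status pair seen on
     * every aborted command above; plain C, not SPDK's spdk_nvme_print_completion. */
    #include <stdio.h>

    static const char *
    decode_status(unsigned sct, unsigned sc)
    {
        /* SCT 0x0 = generic command status; SC 0x08 = Command Aborted due to
         * SQ Deletion (NVMe base specification). Other codes are not mapped here. */
        if (sct == 0x0 && sc == 0x08) {
            return "ABORTED - SQ DELETION";
        }
        return "other/unmapped status";
    }

    int
    main(void)
    {
        unsigned sct = 0x0, sc = 0x08;   /* the (00/08) pair from the log */
        printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
        return 0;
    }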
00:32:24.235 [2024-10-07 09:53:18.864186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.864683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.864733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.864752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.865016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.865261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.865284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.865300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.868857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.235 [2024-10-07 09:53:18.878106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.878596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.878628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.878647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.878884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.879140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.879164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.879180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.882736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
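A quick cross-check of the bdevperf throughput samples printed near the start of this section (8475.00 IOPS at 33.11 MiB/s, then 8655.50 IOPS at 33.81 MiB/s): 8475 x 4096 B = 34,713,600 B/s ≈ 33.11 MiB/s, and 8655.5 x 4096 B = 35,452,928 B/s ≈ 33.81 MiB/s, so both samples are self-consistent with a 4 KiB I/O size. The 4 KiB figure is inferred from the arithmetic; the log itself does not state the block size.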
00:32:24.235 [2024-10-07 09:53:18.891985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.892443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.892475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.892493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.892730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.892985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.893010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.893026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.896582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.235 [2024-10-07 09:53:18.905827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.906320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.906352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.906371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.906609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.906859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.906883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.906910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.910473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
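Each reset attempt in this run fails inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: the kill -9 of pid 1662775 earlier in the test took the NVMe-oF/TCP target down, so nothing is listening on 10.0.0.2:4420 and every reconnect is refused until the target is restarted. The fragment below reproduces that failure mode with plain POSIX sockets (deliberately not SPDK's sock layer); the address and port are copied from the log, everything else is illustrative.

    /* connect_probe.c - standalone reproduction of the errno = 111 (ECONNREFUSED)
     * failures above, using plain POSIX sockets instead of SPDK's sock layer.
     * 10.0.0.2:4420 is the target address/port reported in the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the port this prints errno = 111 (ECONNREFUSED),
             * matching the posix_sock_create errors in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connect() succeeded - a target is listening again\n");
        }
        close(fd);
        return 0;
    }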
00:32:24.235 [2024-10-07 09:53:18.919726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.920229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.920261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.920279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.920517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.920759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.920782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.920797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.924388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.235 [2024-10-07 09:53:18.933685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.934126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.934158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.934176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.934414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.934656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.934680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.934695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.938270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.235 [2024-10-07 09:53:18.947515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.947988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.948021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.948039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.948276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.948519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.235 [2024-10-07 09:53:18.948543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.235 [2024-10-07 09:53:18.948559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.235 [2024-10-07 09:53:18.952136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.235 [2024-10-07 09:53:18.961375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.235 [2024-10-07 09:53:18.961826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.235 [2024-10-07 09:53:18.961857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.235 [2024-10-07 09:53:18.961875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.235 [2024-10-07 09:53:18.962122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.235 [2024-10-07 09:53:18.962366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:18.962389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:18.962405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:18.965972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.236 [2024-10-07 09:53:18.975225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.236 [2024-10-07 09:53:18.975661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.236 [2024-10-07 09:53:18.975693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.236 [2024-10-07 09:53:18.975711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.236 [2024-10-07 09:53:18.975963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.236 [2024-10-07 09:53:18.976206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:18.976230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:18.976245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:18.979805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.236 [2024-10-07 09:53:18.989259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.236 [2024-10-07 09:53:18.989700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.236 [2024-10-07 09:53:18.989732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.236 [2024-10-07 09:53:18.989750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.236 [2024-10-07 09:53:18.989999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.236 [2024-10-07 09:53:18.990242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:18.990266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:18.990282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:18.993866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.236 [2024-10-07 09:53:19.003114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.236 [2024-10-07 09:53:19.003597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.236 [2024-10-07 09:53:19.003629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.236 [2024-10-07 09:53:19.003653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.236 [2024-10-07 09:53:19.003902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.236 [2024-10-07 09:53:19.004145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:19.004169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:19.004184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:19.007740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.236 [2024-10-07 09:53:19.016994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.236 [2024-10-07 09:53:19.017474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.236 [2024-10-07 09:53:19.017505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.236 [2024-10-07 09:53:19.017523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.236 [2024-10-07 09:53:19.017760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.236 [2024-10-07 09:53:19.018014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:19.018039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:19.018054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:19.021627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.236 [2024-10-07 09:53:19.030899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.236 [2024-10-07 09:53:19.031318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.236 [2024-10-07 09:53:19.031350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.236 [2024-10-07 09:53:19.031368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.236 [2024-10-07 09:53:19.031605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.236 [2024-10-07 09:53:19.031847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.236 [2024-10-07 09:53:19.031871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.236 [2024-10-07 09:53:19.031887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.236 [2024-10-07 09:53:19.035463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.236 [2024-10-07 09:53:19.044920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.495 [2024-10-07 09:53:19.045386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.495 [2024-10-07 09:53:19.045419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.495 [2024-10-07 09:53:19.045438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.495 [2024-10-07 09:53:19.045675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.495 [2024-10-07 09:53:19.045930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.495 [2024-10-07 09:53:19.045961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.495 [2024-10-07 09:53:19.045977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.495 [2024-10-07 09:53:19.049536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.495 [2024-10-07 09:53:19.058851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.495 [2024-10-07 09:53:19.059321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.495 [2024-10-07 09:53:19.059353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.495 [2024-10-07 09:53:19.059371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.495 [2024-10-07 09:53:19.059610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.495 [2024-10-07 09:53:19.059852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.059875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.059902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.063461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.496 [2024-10-07 09:53:19.072702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.073220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.073272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.073290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.073528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.073770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.073794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.073809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.077379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.496 [2024-10-07 09:53:19.086624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.087117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.087149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.087168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.087405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.087647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.087671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.087687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.091257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.496 [2024-10-07 09:53:19.100508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.100937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.100970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.100988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.101227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.101469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.101493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.101508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.105071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.496 [2024-10-07 09:53:19.114524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.114965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.114998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.115016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.115254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.115496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.115520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.115535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.119107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.496 [2024-10-07 09:53:19.128377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.128847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.128879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.128906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.129146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.129388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.129412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.129428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.132997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.496 [2024-10-07 09:53:19.142249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.142690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.142743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.142760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.143016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.143260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.143284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.143299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.146858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.496 [2024-10-07 09:53:19.156118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.156605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.156654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.156671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.156920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.157162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.157186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.157201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.160757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.496 [2024-10-07 09:53:19.170013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.170486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.170517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.170534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.170772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.171025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.171049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.171065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.174634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.496 [2024-10-07 09:53:19.183886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.184383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.184415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.184433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.184671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.184927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.184952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.184974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.496 [2024-10-07 09:53:19.188533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.496 [2024-10-07 09:53:19.197772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.496 [2024-10-07 09:53:19.198178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.496 [2024-10-07 09:53:19.198210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.496 [2024-10-07 09:53:19.198227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.496 [2024-10-07 09:53:19.198464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.496 [2024-10-07 09:53:19.198707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.496 [2024-10-07 09:53:19.198730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.496 [2024-10-07 09:53:19.198746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.202315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.497 [2024-10-07 09:53:19.211773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.212252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.212284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.212302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.212540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.212781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.212806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.212821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.216393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.497 [2024-10-07 09:53:19.225666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.226099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.226131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.226149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.226386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.226628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.226652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.226668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.230236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.497 [2024-10-07 09:53:19.239686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.240195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.240227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.240245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.240483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.240725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.240748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.240763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.244334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.497 [2024-10-07 09:53:19.253578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.254018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.254049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.254067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.254305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.254547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.254570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.254586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.258157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.497 [2024-10-07 09:53:19.267444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.267925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.267957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.267976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.268213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.268456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.268480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.268495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.272068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.497 [2024-10-07 09:53:19.281309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.281782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.281832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.281850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.282100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.282349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.282374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.282389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.285952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.497 [2024-10-07 09:53:19.295192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.295687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.295719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.295737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.295988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.296231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.296255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.296271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.497 [2024-10-07 09:53:19.299829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.497 [2024-10-07 09:53:19.309082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.497 [2024-10-07 09:53:19.309553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.497 [2024-10-07 09:53:19.309584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.497 [2024-10-07 09:53:19.309602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.497 [2024-10-07 09:53:19.309839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.497 [2024-10-07 09:53:19.310093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.497 [2024-10-07 09:53:19.310118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.497 [2024-10-07 09:53:19.310133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.756 [2024-10-07 09:53:19.313722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.756 [2024-10-07 09:53:19.322984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.756 [2024-10-07 09:53:19.323463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.756 [2024-10-07 09:53:19.323495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.756 [2024-10-07 09:53:19.323513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.756 [2024-10-07 09:53:19.323752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.756 [2024-10-07 09:53:19.324007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.756 [2024-10-07 09:53:19.324031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.756 [2024-10-07 09:53:19.324047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.756 [2024-10-07 09:53:19.327628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.756 [2024-10-07 09:53:19.336871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.756 [2024-10-07 09:53:19.337406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.756 [2024-10-07 09:53:19.337439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.756 [2024-10-07 09:53:19.337457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.756 [2024-10-07 09:53:19.337695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.756 [2024-10-07 09:53:19.337952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.756 [2024-10-07 09:53:19.337978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.756 [2024-10-07 09:53:19.337994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.756 [2024-10-07 09:53:19.341556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.756 [2024-10-07 09:53:19.350811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.756 [2024-10-07 09:53:19.351265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.756 [2024-10-07 09:53:19.351297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.756 [2024-10-07 09:53:19.351315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.351553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.351794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.351819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.351836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.355405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.757 [2024-10-07 09:53:19.364651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.365122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.365153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.365171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.365409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.365651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.365675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.365691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.369263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.757 [2024-10-07 09:53:19.378521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.378954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.378986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.379011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.379250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.379492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.379515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.379530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.383106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.757 [2024-10-07 09:53:19.392361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.392833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.392865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.392883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.393132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.393375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.393399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.393413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.396977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.757 [2024-10-07 09:53:19.406223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.406674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.406706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.406724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.406974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.407218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.407242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.407257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.410815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.757 [2024-10-07 09:53:19.420069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.420500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.420539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.420557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.420806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.421063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.421094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.421111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.424671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.757 [2024-10-07 09:53:19.433943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.434432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.434464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.434482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.434719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.434974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.434999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.435014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 6885.00 IOPS, 26.89 MiB/s [2024-10-07 09:53:19.440332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.757 [2024-10-07 09:53:19.447903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.448339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.448370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.448388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.448626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.448869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.448902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.448920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.452478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.757 [2024-10-07 09:53:19.461715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.462219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.462251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.462269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.462506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.462749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.462772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.462787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.466357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.757 [2024-10-07 09:53:19.475650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.476103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.476142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.476160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.476398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.757 [2024-10-07 09:53:19.476639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.757 [2024-10-07 09:53:19.476663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.757 [2024-10-07 09:53:19.476678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.757 [2024-10-07 09:53:19.480245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.757 [2024-10-07 09:53:19.489481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.757 [2024-10-07 09:53:19.489939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.757 [2024-10-07 09:53:19.489971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.757 [2024-10-07 09:53:19.489989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.757 [2024-10-07 09:53:19.490226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.490468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.490492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.490507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.494074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.758 [2024-10-07 09:53:19.503315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.758 [2024-10-07 09:53:19.503759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.758 [2024-10-07 09:53:19.503790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.758 [2024-10-07 09:53:19.503808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.758 [2024-10-07 09:53:19.504058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.504301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.504324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.504339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.507904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.758 [2024-10-07 09:53:19.517142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.758 [2024-10-07 09:53:19.517615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.758 [2024-10-07 09:53:19.517647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.758 [2024-10-07 09:53:19.517670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.758 [2024-10-07 09:53:19.517921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.518164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.518187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.518203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.521758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.758 [2024-10-07 09:53:19.531021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.758 [2024-10-07 09:53:19.531488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.758 [2024-10-07 09:53:19.531520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.758 [2024-10-07 09:53:19.531538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.758 [2024-10-07 09:53:19.531776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.532031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.532056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.532071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.535627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.758 [2024-10-07 09:53:19.544866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.758 [2024-10-07 09:53:19.545322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.758 [2024-10-07 09:53:19.545357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.758 [2024-10-07 09:53:19.545375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.758 [2024-10-07 09:53:19.545613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.545854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.545878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.545903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.549465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.758 [2024-10-07 09:53:19.558704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.758 [2024-10-07 09:53:19.559186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.758 [2024-10-07 09:53:19.559218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:24.758 [2024-10-07 09:53:19.559236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:24.758 [2024-10-07 09:53:19.559473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:24.758 [2024-10-07 09:53:19.559715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.758 [2024-10-07 09:53:19.559744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.758 [2024-10-07 09:53:19.559761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.758 [2024-10-07 09:53:19.563332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.017 [2024-10-07 09:53:19.572581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.017 [2024-10-07 09:53:19.572971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.017 [2024-10-07 09:53:19.573003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.017 [2024-10-07 09:53:19.573022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.017 [2024-10-07 09:53:19.573260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.017 [2024-10-07 09:53:19.573501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.017 [2024-10-07 09:53:19.573526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.017 [2024-10-07 09:53:19.573542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.017 [2024-10-07 09:53:19.577124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.017 [2024-10-07 09:53:19.586586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.017 [2024-10-07 09:53:19.586999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.017 [2024-10-07 09:53:19.587031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.017 [2024-10-07 09:53:19.587049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.017 [2024-10-07 09:53:19.587287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.017 [2024-10-07 09:53:19.587529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.017 [2024-10-07 09:53:19.587553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.017 [2024-10-07 09:53:19.587569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.017 [2024-10-07 09:53:19.591147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.017 [2024-10-07 09:53:19.600609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.017 [2024-10-07 09:53:19.601008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.017 [2024-10-07 09:53:19.601040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.017 [2024-10-07 09:53:19.601058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.017 [2024-10-07 09:53:19.601296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.017 [2024-10-07 09:53:19.601538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.017 [2024-10-07 09:53:19.601563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.017 [2024-10-07 09:53:19.601578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.017 [2024-10-07 09:53:19.605146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.017 [2024-10-07 09:53:19.614592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.017 [2024-10-07 09:53:19.615005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.017 [2024-10-07 09:53:19.615037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.017 [2024-10-07 09:53:19.615055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.017 [2024-10-07 09:53:19.615292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.017 [2024-10-07 09:53:19.615535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.017 [2024-10-07 09:53:19.615559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.017 [2024-10-07 09:53:19.615574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.017 [2024-10-07 09:53:19.619150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.017 [2024-10-07 09:53:19.628628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.629014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.629046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.629064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.629302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.629545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.629569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.629585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.633171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.018 [2024-10-07 09:53:19.642623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.643047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.643078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.643097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.643334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.643577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.643600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.643615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.647191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.018 [2024-10-07 09:53:19.656664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.657163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.657195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.657213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.657457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.657700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.657723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.657739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.661305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.018 [2024-10-07 09:53:19.670568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.670994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.671026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.671044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.671281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.671524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.671548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.671563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.675126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.018 [2024-10-07 09:53:19.684423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.684906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.684939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.684957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.685195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.685439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.685462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.685477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.689041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.018 [2024-10-07 09:53:19.698424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.698817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.698848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.698866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.699117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.699361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.699385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.699407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.702975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.018 [2024-10-07 09:53:19.712424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.712920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.712953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.712971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.713210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.713451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.713475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.713490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.717062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.018 [2024-10-07 09:53:19.726297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.726713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.726744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.726762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.727025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.727268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.727293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.727308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.730864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.018 [2024-10-07 09:53:19.740320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.740813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.740866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.740884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.741132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.741376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.741399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.741414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.744981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.018 [2024-10-07 09:53:19.754234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.754665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.754701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.018 [2024-10-07 09:53:19.754719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.018 [2024-10-07 09:53:19.754970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.018 [2024-10-07 09:53:19.755213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.018 [2024-10-07 09:53:19.755237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.018 [2024-10-07 09:53:19.755252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.018 [2024-10-07 09:53:19.758811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.018 [2024-10-07 09:53:19.768093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.018 [2024-10-07 09:53:19.768538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.018 [2024-10-07 09:53:19.768590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.019 [2024-10-07 09:53:19.768608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.019 [2024-10-07 09:53:19.768846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.019 [2024-10-07 09:53:19.769098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.019 [2024-10-07 09:53:19.769123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.019 [2024-10-07 09:53:19.769138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.019 [2024-10-07 09:53:19.772699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.019 [2024-10-07 09:53:19.781959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.019 [2024-10-07 09:53:19.782446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.019 [2024-10-07 09:53:19.782497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.019 [2024-10-07 09:53:19.782515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.019 [2024-10-07 09:53:19.782752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.019 [2024-10-07 09:53:19.783006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.019 [2024-10-07 09:53:19.783031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.019 [2024-10-07 09:53:19.783046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.019 [2024-10-07 09:53:19.786600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.019 [2024-10-07 09:53:19.795855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.019 [2024-10-07 09:53:19.796271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.019 [2024-10-07 09:53:19.796327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.019 [2024-10-07 09:53:19.796345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.019 [2024-10-07 09:53:19.796583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.019 [2024-10-07 09:53:19.796832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.019 [2024-10-07 09:53:19.796856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.019 [2024-10-07 09:53:19.796872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.019 [2024-10-07 09:53:19.800452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.019 [2024-10-07 09:53:19.809713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.019 [2024-10-07 09:53:19.810085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.019 [2024-10-07 09:53:19.810117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.019 [2024-10-07 09:53:19.810135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.019 [2024-10-07 09:53:19.810371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.019 [2024-10-07 09:53:19.810614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.019 [2024-10-07 09:53:19.810638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.019 [2024-10-07 09:53:19.810653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.019 [2024-10-07 09:53:19.814231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.019 [2024-10-07 09:53:19.823719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.019 [2024-10-07 09:53:19.824111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.019 [2024-10-07 09:53:19.824172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.019 [2024-10-07 09:53:19.824190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.019 [2024-10-07 09:53:19.824428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.019 [2024-10-07 09:53:19.824676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.019 [2024-10-07 09:53:19.824700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.019 [2024-10-07 09:53:19.824715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.019 [2024-10-07 09:53:19.828302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.278 [2024-10-07 09:53:19.837598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.838079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.278 [2024-10-07 09:53:19.838145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.278 [2024-10-07 09:53:19.838164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.278 [2024-10-07 09:53:19.838420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.278 [2024-10-07 09:53:19.838668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.278 [2024-10-07 09:53:19.838688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.278 [2024-10-07 09:53:19.838702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.278 [2024-10-07 09:53:19.842284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.278 [2024-10-07 09:53:19.851538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.851986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.278 [2024-10-07 09:53:19.852016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.278 [2024-10-07 09:53:19.852032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.278 [2024-10-07 09:53:19.852282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.278 [2024-10-07 09:53:19.852525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.278 [2024-10-07 09:53:19.852548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.278 [2024-10-07 09:53:19.852563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.278 [2024-10-07 09:53:19.856223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.278 [2024-10-07 09:53:19.865457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.865964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.278 [2024-10-07 09:53:19.865992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.278 [2024-10-07 09:53:19.866009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.278 [2024-10-07 09:53:19.866253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.278 [2024-10-07 09:53:19.866497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.278 [2024-10-07 09:53:19.866521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.278 [2024-10-07 09:53:19.866536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.278 [2024-10-07 09:53:19.870055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.278 [2024-10-07 09:53:19.879350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.879832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.278 [2024-10-07 09:53:19.879863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.278 [2024-10-07 09:53:19.879881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.278 [2024-10-07 09:53:19.880123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.278 [2024-10-07 09:53:19.880381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.278 [2024-10-07 09:53:19.880406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.278 [2024-10-07 09:53:19.880421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.278 [2024-10-07 09:53:19.884038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.278 [2024-10-07 09:53:19.893224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.893730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.278 [2024-10-07 09:53:19.893761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.278 [2024-10-07 09:53:19.893785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.278 [2024-10-07 09:53:19.894044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.278 [2024-10-07 09:53:19.894286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.278 [2024-10-07 09:53:19.894311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.278 [2024-10-07 09:53:19.894327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.278 [2024-10-07 09:53:19.897887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.278 [2024-10-07 09:53:19.907167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.278 [2024-10-07 09:53:19.907600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.907651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.907669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.907918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.908161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.908192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.908207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.911770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.279 [2024-10-07 09:53:19.921055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.921488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.921538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.921556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.921794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.922053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.922077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.922092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.925654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.279 [2024-10-07 09:53:19.934958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.935447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.935500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.935518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.935757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.936013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.936043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.936059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.939624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.279 [2024-10-07 09:53:19.948883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.949329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.949378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.949396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.949634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.949876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.949911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.949928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.953486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.279 [2024-10-07 09:53:19.962745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.963160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.963213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.963231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.963470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.963711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.963736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.963751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.967327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.279 [2024-10-07 09:53:19.976583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.977043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.977090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.977108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.977345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.977588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.977612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.977627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.981201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.279 [2024-10-07 09:53:19.990451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:19.990927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:19.990959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:19.990977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:19.991215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:19.991457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:19.991480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:19.991496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:19.995070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.279 [2024-10-07 09:53:20.004765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:20.005236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:20.005291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:20.005310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:20.005551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:20.005795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:20.005819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:20.005835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:20.009432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.279 [2024-10-07 09:53:20.018696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:20.019139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:20.019197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:20.019216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:20.019454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:20.019697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:20.019722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:20.019738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:20.023344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.279 [2024-10-07 09:53:20.032280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:20.032760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:20.032800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:20.032817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.279 [2024-10-07 09:53:20.033064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.279 [2024-10-07 09:53:20.033283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.279 [2024-10-07 09:53:20.033305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.279 [2024-10-07 09:53:20.033320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.279 [2024-10-07 09:53:20.036632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.279 [2024-10-07 09:53:20.045836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.279 [2024-10-07 09:53:20.046306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.279 [2024-10-07 09:53:20.046347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.279 [2024-10-07 09:53:20.046362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.280 [2024-10-07 09:53:20.046557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.280 [2024-10-07 09:53:20.046774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.280 [2024-10-07 09:53:20.046794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.280 [2024-10-07 09:53:20.046807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.280 [2024-10-07 09:53:20.049942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.280 [2024-10-07 09:53:20.059248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.280 [2024-10-07 09:53:20.059690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.280 [2024-10-07 09:53:20.059721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.280 [2024-10-07 09:53:20.059749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.280 [2024-10-07 09:53:20.059991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.280 [2024-10-07 09:53:20.060217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.280 [2024-10-07 09:53:20.060252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.280 [2024-10-07 09:53:20.060265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.280 [2024-10-07 09:53:20.063510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.280 [2024-10-07 09:53:20.072854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.280 [2024-10-07 09:53:20.073383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.280 [2024-10-07 09:53:20.073435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.280 [2024-10-07 09:53:20.073453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.280 [2024-10-07 09:53:20.073698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.280 [2024-10-07 09:53:20.073928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.280 [2024-10-07 09:53:20.073949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.280 [2024-10-07 09:53:20.073985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.280 [2024-10-07 09:53:20.077535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.280 [2024-10-07 09:53:20.086761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.280 [2024-10-07 09:53:20.087191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.280 [2024-10-07 09:53:20.087222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.280 [2024-10-07 09:53:20.087241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.280 [2024-10-07 09:53:20.087479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.280 [2024-10-07 09:53:20.087721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.280 [2024-10-07 09:53:20.087745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.280 [2024-10-07 09:53:20.087761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.280 [2024-10-07 09:53:20.091439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.539 [2024-10-07 09:53:20.100851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.101348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.539 [2024-10-07 09:53:20.101375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.539 [2024-10-07 09:53:20.101391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.539 [2024-10-07 09:53:20.101642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.539 [2024-10-07 09:53:20.101885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.539 [2024-10-07 09:53:20.101923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.539 [2024-10-07 09:53:20.101939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.539 [2024-10-07 09:53:20.105539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.539 [2024-10-07 09:53:20.114713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.115231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.539 [2024-10-07 09:53:20.115257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.539 [2024-10-07 09:53:20.115289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.539 [2024-10-07 09:53:20.115527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.539 [2024-10-07 09:53:20.115769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.539 [2024-10-07 09:53:20.115793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.539 [2024-10-07 09:53:20.115808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.539 [2024-10-07 09:53:20.119394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.539 [2024-10-07 09:53:20.128693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.129200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.539 [2024-10-07 09:53:20.129258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.539 [2024-10-07 09:53:20.129277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.539 [2024-10-07 09:53:20.129516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.539 [2024-10-07 09:53:20.129758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.539 [2024-10-07 09:53:20.129782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.539 [2024-10-07 09:53:20.129797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.539 [2024-10-07 09:53:20.133358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.539 [2024-10-07 09:53:20.142639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.143128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.539 [2024-10-07 09:53:20.143161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.539 [2024-10-07 09:53:20.143179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.539 [2024-10-07 09:53:20.143423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.539 [2024-10-07 09:53:20.143663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.539 [2024-10-07 09:53:20.143688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.539 [2024-10-07 09:53:20.143703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.539 [2024-10-07 09:53:20.147266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.539 [2024-10-07 09:53:20.156497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.156913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.539 [2024-10-07 09:53:20.156944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.539 [2024-10-07 09:53:20.156962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.539 [2024-10-07 09:53:20.157200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.539 [2024-10-07 09:53:20.157441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.539 [2024-10-07 09:53:20.157478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.539 [2024-10-07 09:53:20.157491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.539 [2024-10-07 09:53:20.161012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.539 [2024-10-07 09:53:20.170407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.539 [2024-10-07 09:53:20.170844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.170868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.170909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.171132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.171371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.171391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.171403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.174543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.540 [2024-10-07 09:53:20.183965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.184433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.184458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.184488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.184702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.184947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.184968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.184982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.188194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.540 [2024-10-07 09:53:20.197503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.197972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.197998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.198027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.198247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.198445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.198465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.198477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.201627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.540 [2024-10-07 09:53:20.211222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.211670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.211709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.211724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.211961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.212207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.212228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.212256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.215295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.540 [2024-10-07 09:53:20.224576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.225041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.225082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.225098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.225317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.225515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.225535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.225548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.228721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.540 [2024-10-07 09:53:20.238146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.238608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.238648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.238664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.238858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.239106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.239144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.239157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.242241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.540 [2024-10-07 09:53:20.251542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.251965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.251991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.252021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.252254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.252473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.252493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.252506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.255616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.540 [2024-10-07 09:53:20.264983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.265458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.265484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.265519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.265714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.265956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.265978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.265991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.269116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.540 [2024-10-07 09:53:20.278540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.279014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.279054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.279070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.279283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.279501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.279521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.279535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.282723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.540 [2024-10-07 09:53:20.292178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.292605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.292644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.292659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.540 [2024-10-07 09:53:20.292887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.540 [2024-10-07 09:53:20.293141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.540 [2024-10-07 09:53:20.293162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.540 [2024-10-07 09:53:20.293176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.540 [2024-10-07 09:53:20.296292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.540 [2024-10-07 09:53:20.305802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.540 [2024-10-07 09:53:20.306280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.540 [2024-10-07 09:53:20.306320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.540 [2024-10-07 09:53:20.306335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.541 [2024-10-07 09:53:20.306530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.541 [2024-10-07 09:53:20.306728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.541 [2024-10-07 09:53:20.306753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.541 [2024-10-07 09:53:20.306766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.541 [2024-10-07 09:53:20.309881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.541 [2024-10-07 09:53:20.319381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.541 [2024-10-07 09:53:20.319804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.541 [2024-10-07 09:53:20.319832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.541 [2024-10-07 09:53:20.319847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.541 [2024-10-07 09:53:20.320086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.541 [2024-10-07 09:53:20.320333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.541 [2024-10-07 09:53:20.320353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.541 [2024-10-07 09:53:20.320365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.541 [2024-10-07 09:53:20.323473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.541 [2024-10-07 09:53:20.332922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.541 [2024-10-07 09:53:20.333390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.541 [2024-10-07 09:53:20.333416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.541 [2024-10-07 09:53:20.333430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.541 [2024-10-07 09:53:20.333639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.541 [2024-10-07 09:53:20.333837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.541 [2024-10-07 09:53:20.333857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.541 [2024-10-07 09:53:20.333885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.541 [2024-10-07 09:53:20.337097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.541 [2024-10-07 09:53:20.346281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.541 [2024-10-07 09:53:20.346699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.541 [2024-10-07 09:53:20.346739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.541 [2024-10-07 09:53:20.346753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.541 [2024-10-07 09:53:20.347009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.541 [2024-10-07 09:53:20.347258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.541 [2024-10-07 09:53:20.347279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.541 [2024-10-07 09:53:20.347294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.541 [2024-10-07 09:53:20.350571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.800 [2024-10-07 09:53:20.359933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.800 [2024-10-07 09:53:20.360436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.800 [2024-10-07 09:53:20.360477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.800 [2024-10-07 09:53:20.360493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.800 [2024-10-07 09:53:20.360715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.800 [2024-10-07 09:53:20.360977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.800 [2024-10-07 09:53:20.361000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.800 [2024-10-07 09:53:20.361015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.800 [2024-10-07 09:53:20.364307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.800 [2024-10-07 09:53:20.373491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.800 [2024-10-07 09:53:20.373930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.800 [2024-10-07 09:53:20.373971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.800 [2024-10-07 09:53:20.373987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.800 [2024-10-07 09:53:20.374202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.800 [2024-10-07 09:53:20.374400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.800 [2024-10-07 09:53:20.374420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.800 [2024-10-07 09:53:20.374432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.800 [2024-10-07 09:53:20.377642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.800 [2024-10-07 09:53:20.387028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.800 [2024-10-07 09:53:20.387489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.800 [2024-10-07 09:53:20.387514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.800 [2024-10-07 09:53:20.387544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.800 [2024-10-07 09:53:20.387758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.800 [2024-10-07 09:53:20.388008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.800 [2024-10-07 09:53:20.388030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.800 [2024-10-07 09:53:20.388043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.800 [2024-10-07 09:53:20.391241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.800 [2024-10-07 09:53:20.400552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.800 [2024-10-07 09:53:20.401008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.800 [2024-10-07 09:53:20.401036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.800 [2024-10-07 09:53:20.401067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.800 [2024-10-07 09:53:20.401298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.401503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.401523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.401536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.404714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.801 [2024-10-07 09:53:20.414018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.414443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.414481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.414496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.414690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.414913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.414948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.414963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.418132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.801 [2024-10-07 09:53:20.427549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.427987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.428019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.428049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.428284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.428501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.428521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.428534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.431801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.801 5163.75 IOPS, 20.17 MiB/s [2024-10-07 09:53:20.442667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.443113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.443161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.443178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.443390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.443608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.443629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.443647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.446862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.801 [2024-10-07 09:53:20.455998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.456422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.456462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.456477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.456671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.456884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.456914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.456927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.459898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.801 [2024-10-07 09:53:20.469389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.469840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.469866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.469902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.470111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.470328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.470349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.470361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.473385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.801 [2024-10-07 09:53:20.482754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.483239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.483265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.483279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.483488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.483686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.483705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.483718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.486687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.801 [2024-10-07 09:53:20.495957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.496390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.496419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.496450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.496645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.496843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.496863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.496899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.499911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.801 [2024-10-07 09:53:20.509199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.509634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.509673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.509688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.509908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.510134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.510155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.510183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.513217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.801 [2024-10-07 09:53:20.522506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.522908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.522935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.522951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.523152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.523367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.523387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.801 [2024-10-07 09:53:20.523399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.801 [2024-10-07 09:53:20.526404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.801 [2024-10-07 09:53:20.535832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.801 [2024-10-07 09:53:20.536234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.801 [2024-10-07 09:53:20.536274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.801 [2024-10-07 09:53:20.536289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.801 [2024-10-07 09:53:20.536497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.801 [2024-10-07 09:53:20.536701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.801 [2024-10-07 09:53:20.536720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.536733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.539739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.802 [2024-10-07 09:53:20.549163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.802 [2024-10-07 09:53:20.549634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.802 [2024-10-07 09:53:20.549659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.802 [2024-10-07 09:53:20.549673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.802 [2024-10-07 09:53:20.549905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.802 [2024-10-07 09:53:20.550132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.802 [2024-10-07 09:53:20.550153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.550181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.553153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.802 [2024-10-07 09:53:20.562423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.802 [2024-10-07 09:53:20.562803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.802 [2024-10-07 09:53:20.562842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.802 [2024-10-07 09:53:20.562856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.802 [2024-10-07 09:53:20.563099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.802 [2024-10-07 09:53:20.563335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.802 [2024-10-07 09:53:20.563355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.563368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.566332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.802 [2024-10-07 09:53:20.575771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.802 [2024-10-07 09:53:20.576288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.802 [2024-10-07 09:53:20.576313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.802 [2024-10-07 09:53:20.576326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.802 [2024-10-07 09:53:20.576535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.802 [2024-10-07 09:53:20.576734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.802 [2024-10-07 09:53:20.576754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.576766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.579778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.802 [2024-10-07 09:53:20.589301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.802 [2024-10-07 09:53:20.589666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.802 [2024-10-07 09:53:20.589692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.802 [2024-10-07 09:53:20.589707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.802 [2024-10-07 09:53:20.589928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.802 [2024-10-07 09:53:20.590147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.802 [2024-10-07 09:53:20.590185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.590200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.593293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:25.802 [2024-10-07 09:53:20.602744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:25.802 [2024-10-07 09:53:20.603127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.802 [2024-10-07 09:53:20.603156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:25.802 [2024-10-07 09:53:20.603188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:25.802 [2024-10-07 09:53:20.603401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:25.802 [2024-10-07 09:53:20.603600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:25.802 [2024-10-07 09:53:20.603621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:25.802 [2024-10-07 09:53:20.603634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:25.802 [2024-10-07 09:53:20.606702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.061 [2024-10-07 09:53:20.616498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.616943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.616970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.061 [2024-10-07 09:53:20.616999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.061 [2024-10-07 09:53:20.617207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.061 [2024-10-07 09:53:20.617421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.061 [2024-10-07 09:53:20.617441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.061 [2024-10-07 09:53:20.617454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.061 [2024-10-07 09:53:20.620627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.061 [2024-10-07 09:53:20.629716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.630084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.630126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.061 [2024-10-07 09:53:20.630146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.061 [2024-10-07 09:53:20.630374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.061 [2024-10-07 09:53:20.630572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.061 [2024-10-07 09:53:20.630592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.061 [2024-10-07 09:53:20.630604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.061 [2024-10-07 09:53:20.633589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.061 [2024-10-07 09:53:20.643090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.643554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.643594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.061 [2024-10-07 09:53:20.643609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.061 [2024-10-07 09:53:20.643804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.061 [2024-10-07 09:53:20.644051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.061 [2024-10-07 09:53:20.644073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.061 [2024-10-07 09:53:20.644087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.061 [2024-10-07 09:53:20.647073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.061 [2024-10-07 09:53:20.656369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.656801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.656841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.061 [2024-10-07 09:53:20.656856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.061 [2024-10-07 09:53:20.657100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.061 [2024-10-07 09:53:20.657337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.061 [2024-10-07 09:53:20.657357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.061 [2024-10-07 09:53:20.657370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.061 [2024-10-07 09:53:20.660359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.061 [2024-10-07 09:53:20.669681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.670131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.670171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.061 [2024-10-07 09:53:20.670186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.061 [2024-10-07 09:53:20.670397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.061 [2024-10-07 09:53:20.670596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.061 [2024-10-07 09:53:20.670620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.061 [2024-10-07 09:53:20.670634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.061 [2024-10-07 09:53:20.673644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.061 [2024-10-07 09:53:20.682991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.061 [2024-10-07 09:53:20.683432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.061 [2024-10-07 09:53:20.683472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.683487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.683682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.683905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.683926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.683955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.686948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.062 [2024-10-07 09:53:20.696197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.696636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.696661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.696692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.696912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.697137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.697159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.697187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.700158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.062 [2024-10-07 09:53:20.709430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.709884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.709929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.709944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.710166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.710381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.710401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.710414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.713422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.062 [2024-10-07 09:53:20.722755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.723213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.723258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.723274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.723469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.723666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.723686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.723698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.726705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.062 [2024-10-07 09:53:20.736002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.736468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.736508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.736524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.736718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.736943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.736965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.736994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.739989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.062 [2024-10-07 09:53:20.749258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.749672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.749697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.749711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.749972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.750198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.750219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.750232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.753209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.062 [2024-10-07 09:53:20.762475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.762895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.762941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.762956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.763176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.763393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.763413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.763425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.766420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.062 [2024-10-07 09:53:20.775680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.776139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.776166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.776196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.776427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.776625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.776645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.776657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.779665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.062 [2024-10-07 09:53:20.789047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.789494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.789519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.789549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.789743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.789967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.789988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.790002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.792973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.062 [2024-10-07 09:53:20.802345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.802782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.802806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.802836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.803060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.062 [2024-10-07 09:53:20.803278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.062 [2024-10-07 09:53:20.803298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.062 [2024-10-07 09:53:20.803317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.062 [2024-10-07 09:53:20.806312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.062 [2024-10-07 09:53:20.815578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.062 [2024-10-07 09:53:20.816021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.062 [2024-10-07 09:53:20.816063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.062 [2024-10-07 09:53:20.816079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.062 [2024-10-07 09:53:20.816292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.063 [2024-10-07 09:53:20.816491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.063 [2024-10-07 09:53:20.816511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.063 [2024-10-07 09:53:20.816524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.063 [2024-10-07 09:53:20.819571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.063 [2024-10-07 09:53:20.828908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.063 [2024-10-07 09:53:20.829342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.063 [2024-10-07 09:53:20.829369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.063 [2024-10-07 09:53:20.829384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.063 [2024-10-07 09:53:20.829579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.063 [2024-10-07 09:53:20.829777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.063 [2024-10-07 09:53:20.829797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.063 [2024-10-07 09:53:20.829809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.063 [2024-10-07 09:53:20.832834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.063 [2024-10-07 09:53:20.842159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.063 [2024-10-07 09:53:20.842640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.063 [2024-10-07 09:53:20.842666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.063 [2024-10-07 09:53:20.842696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.063 [2024-10-07 09:53:20.842918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.063 [2024-10-07 09:53:20.843145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.063 [2024-10-07 09:53:20.843166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.063 [2024-10-07 09:53:20.843194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.063 [2024-10-07 09:53:20.846164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.063 [2024-10-07 09:53:20.855453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.063 [2024-10-07 09:53:20.855961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.063 [2024-10-07 09:53:20.856010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.063 [2024-10-07 09:53:20.856027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.063 [2024-10-07 09:53:20.856234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.063 [2024-10-07 09:53:20.856467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.063 [2024-10-07 09:53:20.856488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.063 [2024-10-07 09:53:20.856501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.063 [2024-10-07 09:53:20.859481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.063 [2024-10-07 09:53:20.868681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.063 [2024-10-07 09:53:20.869162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.063 [2024-10-07 09:53:20.869188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.063 [2024-10-07 09:53:20.869217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.063 [2024-10-07 09:53:20.869418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.063 [2024-10-07 09:53:20.869623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.063 [2024-10-07 09:53:20.869643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.063 [2024-10-07 09:53:20.869657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.063 [2024-10-07 09:53:20.872785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.322 [2024-10-07 09:53:20.882279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.322 [2024-10-07 09:53:20.882725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.882750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.882779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.883021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.883272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.883308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.883320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.886291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.323 [2024-10-07 09:53:20.895601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.896036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.896073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.896102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.896315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.896520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.896540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.896552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.899595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.323 [2024-10-07 09:53:20.908964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.909349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.909390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.909404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.909613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.909812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.909832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.909845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.912813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.323 [2024-10-07 09:53:20.922193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.922556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.922582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.922598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.922811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.923039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.923059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.923073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.926061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.323 [2024-10-07 09:53:20.935493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.935861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.935912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.935929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.936130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.936346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.936366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.936378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.939402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.323 [2024-10-07 09:53:20.948864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.949273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.949314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.949328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.949536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.949735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.949754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.949767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.952794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.323 [2024-10-07 09:53:20.962128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.962585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.962624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.962640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.962834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.963080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.963102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.963116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.966107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.323 [2024-10-07 09:53:20.975337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.975738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.975778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.975793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.976049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.976287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.976308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.976321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.979320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.323 [2024-10-07 09:53:20.988627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:20.989035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:20.989061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:20.989096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:20.989310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:20.989509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:20.989529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:20.989541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:20.992515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.323 [2024-10-07 09:53:21.002086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:21.002551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:21.002591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.323 [2024-10-07 09:53:21.002607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.323 [2024-10-07 09:53:21.002801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.323 [2024-10-07 09:53:21.003033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.323 [2024-10-07 09:53:21.003055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.323 [2024-10-07 09:53:21.003069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.323 [2024-10-07 09:53:21.006112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.323 [2024-10-07 09:53:21.015425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.323 [2024-10-07 09:53:21.015863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.323 [2024-10-07 09:53:21.015912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.015928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.016155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.016369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.016389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.016401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.019571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.324 [2024-10-07 09:53:21.028738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.029145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.029172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.029188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.029398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.029597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.029622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.029635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.032626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.324 [2024-10-07 09:53:21.042009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.042443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.042468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.042497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.042692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.042913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.042942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.042955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.045967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.324 [2024-10-07 09:53:21.055278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.055717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.055756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.055771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.055991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.056197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.056231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.056244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.059264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.324 [2024-10-07 09:53:21.068551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.068939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.068967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.068997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.069219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.069433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.069453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.069466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.072438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.324 [2024-10-07 09:53:21.081798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.082145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.082172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.082201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.082397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.082595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.082614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.082627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.085637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.324 [2024-10-07 09:53:21.095120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.095574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.095614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.095630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.095824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.096055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.096077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.096090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.099102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.324 [2024-10-07 09:53:21.108383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.108838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.108864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.108902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.109124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.109365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.109386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.109399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.112366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.324 [2024-10-07 09:53:21.121645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.122014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.122040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.122056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.122275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.122473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.122493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.122506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.324 [2024-10-07 09:53:21.125525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.324 [2024-10-07 09:53:21.135271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.324 [2024-10-07 09:53:21.135738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.324 [2024-10-07 09:53:21.135794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.324 [2024-10-07 09:53:21.135820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.324 [2024-10-07 09:53:21.136070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.324 [2024-10-07 09:53:21.136314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.324 [2024-10-07 09:53:21.136337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.324 [2024-10-07 09:53:21.136352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.139927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.583 [2024-10-07 09:53:21.149207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.149616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.149667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.149685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.149935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.150179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.150202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.150218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.153783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.583 [2024-10-07 09:53:21.163054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.163494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.163525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.163542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.163780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.164033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.164059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.164080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.167648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.583 [2024-10-07 09:53:21.176919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.177349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.177402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.177420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.177657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.177912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.177937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.177953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.181514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.583 [2024-10-07 09:53:21.190765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.191194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.191248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.191266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.191503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.191745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.191768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.191784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.195358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.583 [2024-10-07 09:53:21.204599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.205036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.205089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.205107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.205345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.205587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.205610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.205625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.209193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.583 [2024-10-07 09:53:21.218441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.218887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.218933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.218951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.219189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.219432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.219455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.219471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.223050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.583 [2024-10-07 09:53:21.232323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.232774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.232822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.232840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.583 [2024-10-07 09:53:21.233089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.583 [2024-10-07 09:53:21.233332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.583 [2024-10-07 09:53:21.233357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.583 [2024-10-07 09:53:21.233372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.583 [2024-10-07 09:53:21.236938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.583 [2024-10-07 09:53:21.246178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.583 [2024-10-07 09:53:21.246640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.583 [2024-10-07 09:53:21.246690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.583 [2024-10-07 09:53:21.246707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.246957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.247200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.247224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.247239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.250794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.584 [2024-10-07 09:53:21.260047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.260556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.260607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.260625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.260862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.261122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.261146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.261162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.264723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.584 [2024-10-07 09:53:21.273972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.274459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.274491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.274509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.274747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.275004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.275029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.275045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.278600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.584 [2024-10-07 09:53:21.287838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.288301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.288352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.288370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.288607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.288849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.288873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.288889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.292463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.584 [2024-10-07 09:53:21.301699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.302209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.302258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.302275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.302513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.302756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.302779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.302795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.306371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.584 [2024-10-07 09:53:21.315613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.316087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.316140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.316159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.316397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.316639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.316662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.316677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.320253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.584 [2024-10-07 09:53:21.329499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.329956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.329988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.330006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.330245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.330487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.330511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.330526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.334113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.584 [2024-10-07 09:53:21.343402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.343856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.343887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.343918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.344157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.344399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.344423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.344438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.348005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.584 [2024-10-07 09:53:21.357254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.357747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.357800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.357824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.358074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.358316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.358341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.358356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.361949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.584 [2024-10-07 09:53:21.371221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.371634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.371665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.371684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.371934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.372177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.584 [2024-10-07 09:53:21.372202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.584 [2024-10-07 09:53:21.372218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.584 [2024-10-07 09:53:21.375779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.584 [2024-10-07 09:53:21.385251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.584 [2024-10-07 09:53:21.385686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.584 [2024-10-07 09:53:21.385718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.584 [2024-10-07 09:53:21.385736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.584 [2024-10-07 09:53:21.385986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.584 [2024-10-07 09:53:21.386228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.585 [2024-10-07 09:53:21.386252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.585 [2024-10-07 09:53:21.386267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.585 [2024-10-07 09:53:21.389825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.843 [2024-10-07 09:53:21.399084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.399573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.399630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.399648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.399886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.400138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.400168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.400184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.403754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.843 [2024-10-07 09:53:21.413030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.413475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.413507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.413524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.413762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.414017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.414042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.414057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.417619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.843 [2024-10-07 09:53:21.426915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.427396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.427428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.427446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.427683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.427939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.427964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.427980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.431537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.843 [2024-10-07 09:53:21.440806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.441313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.441345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.441362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 4131.00 IOPS, 16.14 MiB/s [2024-10-07 09:53:21.443362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.443603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.443627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.443643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.447212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.843 [2024-10-07 09:53:21.454777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.455268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.455300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.455318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.455556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.455798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.455822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.455837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.459408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.843 [2024-10-07 09:53:21.468649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.469161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.469193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.469211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.469449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.843 [2024-10-07 09:53:21.469691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.843 [2024-10-07 09:53:21.469715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.843 [2024-10-07 09:53:21.469730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.843 [2024-10-07 09:53:21.473301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.843 [2024-10-07 09:53:21.482543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.843 [2024-10-07 09:53:21.483042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.843 [2024-10-07 09:53:21.483074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.843 [2024-10-07 09:53:21.483093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.843 [2024-10-07 09:53:21.483330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.483574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.483597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.483612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.487201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.844 [2024-10-07 09:53:21.496441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.496863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.496903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.496923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.497167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.497410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.497434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.497449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.501016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.844 [2024-10-07 09:53:21.510468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.510950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.510982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.511000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.511237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.511480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.511503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.511518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.515089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.844 [2024-10-07 09:53:21.524332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.524780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.524812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.524830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.525080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.525323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.525346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.525362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.528931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.844 [2024-10-07 09:53:21.538193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.538669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.538700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.538718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.538968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.539211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.539235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.539259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.542819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.844 [2024-10-07 09:53:21.552116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.552603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.552656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.552674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.552923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.553166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.553190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.553205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.556759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.844 [2024-10-07 09:53:21.566002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.566493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.566543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.566561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.566798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.567053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.567088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.567104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.570665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.844 [2024-10-07 09:53:21.579915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.580416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.580447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.580465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.580704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.580958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.580983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.580998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.584559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.844 [2024-10-07 09:53:21.593802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.594273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.594305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.594323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.594561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.594802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.594826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.594841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.598410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.844 [2024-10-07 09:53:21.607658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.608050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.608082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.608099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.608336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.608578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.608602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.608618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.844 [2024-10-07 09:53:21.612192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.844 [2024-10-07 09:53:21.621498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.844 [2024-10-07 09:53:21.621955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.844 [2024-10-07 09:53:21.621989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.844 [2024-10-07 09:53:21.622008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.844 [2024-10-07 09:53:21.622245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.844 [2024-10-07 09:53:21.622488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.844 [2024-10-07 09:53:21.622512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.844 [2024-10-07 09:53:21.622528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.845 [2024-10-07 09:53:21.626095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:26.845 [2024-10-07 09:53:21.635355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.845 [2024-10-07 09:53:21.635804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.845 [2024-10-07 09:53:21.635836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.845 [2024-10-07 09:53:21.635854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.845 [2024-10-07 09:53:21.636101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.845 [2024-10-07 09:53:21.636350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.845 [2024-10-07 09:53:21.636375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.845 [2024-10-07 09:53:21.636390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.845 [2024-10-07 09:53:21.639963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:26.845 [2024-10-07 09:53:21.649214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.845 [2024-10-07 09:53:21.649697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.845 [2024-10-07 09:53:21.649746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:26.845 [2024-10-07 09:53:21.649763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:26.845 [2024-10-07 09:53:21.650013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:26.845 [2024-10-07 09:53:21.650256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.845 [2024-10-07 09:53:21.650279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.845 [2024-10-07 09:53:21.650295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.845 [2024-10-07 09:53:21.653859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.104 [2024-10-07 09:53:21.663115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.104 [2024-10-07 09:53:21.663607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.104 [2024-10-07 09:53:21.663657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.104 [2024-10-07 09:53:21.663676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.104 [2024-10-07 09:53:21.663924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.104 [2024-10-07 09:53:21.664167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.104 [2024-10-07 09:53:21.664191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.104 [2024-10-07 09:53:21.664206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.104 [2024-10-07 09:53:21.667761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.104 [2024-10-07 09:53:21.677019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.104 [2024-10-07 09:53:21.677477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.104 [2024-10-07 09:53:21.677509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.104 [2024-10-07 09:53:21.677527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.104 [2024-10-07 09:53:21.677765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.104 [2024-10-07 09:53:21.678020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.104 [2024-10-07 09:53:21.678045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.104 [2024-10-07 09:53:21.678060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.104 [2024-10-07 09:53:21.681760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.104 [2024-10-07 09:53:21.691016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.104 [2024-10-07 09:53:21.691505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.104 [2024-10-07 09:53:21.691536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.104 [2024-10-07 09:53:21.691554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.104 [2024-10-07 09:53:21.691791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.104 [2024-10-07 09:53:21.692046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.104 [2024-10-07 09:53:21.692072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.104 [2024-10-07 09:53:21.692087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.104 [2024-10-07 09:53:21.695650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.104 [2024-10-07 09:53:21.704904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.104 [2024-10-07 09:53:21.705366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.104 [2024-10-07 09:53:21.705397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.104 [2024-10-07 09:53:21.705415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.104 [2024-10-07 09:53:21.705653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.104 [2024-10-07 09:53:21.705906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.104 [2024-10-07 09:53:21.705931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.104 [2024-10-07 09:53:21.705946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.104 [2024-10-07 09:53:21.709505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.104 [2024-10-07 09:53:21.718748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.104 [2024-10-07 09:53:21.719245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.104 [2024-10-07 09:53:21.719277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.104 [2024-10-07 09:53:21.719295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.104 [2024-10-07 09:53:21.719532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.719774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.719797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.719813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.723381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.105 [2024-10-07 09:53:21.732629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.733082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.733123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.733147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.733386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.733640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.733665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.733680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.737251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.105 [2024-10-07 09:53:21.746492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.746981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.747013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.747031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.747269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.747512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.747536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.747551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.751122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.105 [2024-10-07 09:53:21.760409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.760872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.760947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.760966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.761204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.761446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.761470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.761486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.765054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.105 [2024-10-07 09:53:21.774291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.774800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.774846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.774864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.775112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.775356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.775385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.775401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.778969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.105 [2024-10-07 09:53:21.788215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.788702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.788753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.788772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.789022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.789265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.789289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.789303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.792862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.105 [2024-10-07 09:53:21.802115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.802601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.802652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.802669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.802918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.803161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.803185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.803200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.806760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1662775 Killed "${NVMF_APP[@]}" "$@" 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:27.105 [2024-10-07 09:53:21.816011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:27.105 [2024-10-07 09:53:21.816502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.105 [2024-10-07 09:53:21.816554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.816572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.816809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.817073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.817099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.817115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.820679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1663729 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1663729 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1663729 ']' 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.105 09:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.105 [2024-10-07 09:53:21.829975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.830435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.830488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.830507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.105 [2024-10-07 09:53:21.830744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.105 [2024-10-07 09:53:21.830999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.105 [2024-10-07 09:53:21.831025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.105 [2024-10-07 09:53:21.831040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.105 [2024-10-07 09:53:21.834623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.105 [2024-10-07 09:53:21.843899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.105 [2024-10-07 09:53:21.844391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.105 [2024-10-07 09:53:21.844441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.105 [2024-10-07 09:53:21.844459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.844696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.844951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.844975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.844991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.848559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.106 [2024-10-07 09:53:21.857822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.106 [2024-10-07 09:53:21.858233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.106 [2024-10-07 09:53:21.858285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.106 [2024-10-07 09:53:21.858303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.858541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.858783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.858807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.858822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.862392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.106 [2024-10-07 09:53:21.871657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.106 [2024-10-07 09:53:21.872081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.106 [2024-10-07 09:53:21.872143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.106 [2024-10-07 09:53:21.872162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.872399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.872641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.872664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.872679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.876246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.106 [2024-10-07 09:53:21.885494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.106 [2024-10-07 09:53:21.885871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.106 [2024-10-07 09:53:21.885911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.106 [2024-10-07 09:53:21.885930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.886169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.886411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.886435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.886450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.890018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.106 [2024-10-07 09:53:21.896589] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:32:27.106 [2024-10-07 09:53:21.896688] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.106 [2024-10-07 09:53:21.899475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.106 [2024-10-07 09:53:21.899887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.106 [2024-10-07 09:53:21.899928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.106 [2024-10-07 09:53:21.899946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.900185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.900427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.900452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.900468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.904037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.106 [2024-10-07 09:53:21.913499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.106 [2024-10-07 09:53:21.913943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.106 [2024-10-07 09:53:21.913974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.106 [2024-10-07 09:53:21.913993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.106 [2024-10-07 09:53:21.914231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.106 [2024-10-07 09:53:21.914473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.106 [2024-10-07 09:53:21.914497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.106 [2024-10-07 09:53:21.914513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.106 [2024-10-07 09:53:21.918091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.366 [2024-10-07 09:53:21.927347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.927778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.927831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.927849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.928098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.928340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.928364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.928379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:21.931945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.366 [2024-10-07 09:53:21.941217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.941650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.941703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.941721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.941977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.942221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.942245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.942260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:21.945820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.366 [2024-10-07 09:53:21.955145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.955593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.955643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.955661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.955910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.956136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.956156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.956169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:21.959689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.366 [2024-10-07 09:53:21.969038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.969453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.969507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.969525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.969762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.970020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.970041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.970055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:21.973539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.366 [2024-10-07 09:53:21.980455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:27.366 [2024-10-07 09:53:21.982835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.983260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.983318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.983336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.983574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.983817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.983848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.983866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:21.987370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.366 [2024-10-07 09:53:21.996664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:21.997182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:21.997233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:21.997254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:21.997499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:21.997746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:21.997770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:21.997787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:22.001296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.366 [2024-10-07 09:53:22.010572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:22.011008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:22.011035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:22.011064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.366 [2024-10-07 09:53:22.011295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.366 [2024-10-07 09:53:22.011538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.366 [2024-10-07 09:53:22.011562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.366 [2024-10-07 09:53:22.011579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.366 [2024-10-07 09:53:22.015086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.366 [2024-10-07 09:53:22.024454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.366 [2024-10-07 09:53:22.024876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.366 [2024-10-07 09:53:22.024918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.366 [2024-10-07 09:53:22.024953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.025154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.025411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.025436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.025452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.028974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.367 [2024-10-07 09:53:22.038260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.038711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.038763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.038781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.039040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.039278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.039304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.039320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.042811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.367 [2024-10-07 09:53:22.051594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.052149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.052199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.052218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.052479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.052727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.052752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.052770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.056302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.367 [2024-10-07 09:53:22.065483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.065935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.065968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.066000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.066201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.066460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.066484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.066500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.070014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.367 [2024-10-07 09:53:22.079389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.079831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.079881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.079911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.080145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.080389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.080414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.080429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.083966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
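The identical disconnect / connect-failed / reset-failed cycles repeating above come from the bdev_nvme layer retrying the controller reset for the bdev that bdevperf opened. As a rough sketch of how such a controller is attached over the bdevperf RPC socket (the socket path and the reconnect tuning flags below are assumptions, not taken from this job's scripts, and may differ between SPDK versions; transport, address, port and subsystem NQN are the ones the loop above is using):

# Sketch only. Socket path and the -l/-o reconnect options are assumed.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -l 15 -o 2   # assumed: ctrlr-loss-timeout-sec=15, reconnect-delay-sec=2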
00:32:27.367 [2024-10-07 09:53:22.093250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.093684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.093737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.093755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.094008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.094232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.094257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.094273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.097785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.367 [2024-10-07 09:53:22.099072] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.367 [2024-10-07 09:53:22.099114] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.367 [2024-10-07 09:53:22.099131] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.367 [2024-10-07 09:53:22.099145] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.367 [2024-10-07 09:53:22.099156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.367 [2024-10-07 09:53:22.100206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.367 [2024-10-07 09:53:22.100255] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.367 [2024-10-07 09:53:22.100259] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.367 [2024-10-07 09:53:22.106681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.107158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.107205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.107225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.107454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.107668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.107690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.107706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
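The app_setup_trace notices above name the tracing hooks for this run: tracepoint group mask 0xFFFF is enabled, a snapshot can be taken at runtime with spdk_trace, and /dev/shm/nvmf_trace.0 can be copied out for offline analysis. In shell form (the output and copy destinations below are illustrative):

# Runtime snapshot of the nvmf target's trace ring, as suggested by the notices above.
spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
# Or keep the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0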
00:32:27.367 [2024-10-07 09:53:22.110864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.367 [2024-10-07 09:53:22.120217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.120688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.120734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.120753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.121007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.121246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.121269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.121284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.124478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.367 [2024-10-07 09:53:22.133659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.134148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.134200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.134219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.134435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.134651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.134672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.134688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.367 [2024-10-07 09:53:22.137886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.367 [2024-10-07 09:53:22.147310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.367 [2024-10-07 09:53:22.147836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.367 [2024-10-07 09:53:22.147872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.367 [2024-10-07 09:53:22.147916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.367 [2024-10-07 09:53:22.148141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.367 [2024-10-07 09:53:22.148375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.367 [2024-10-07 09:53:22.148398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.367 [2024-10-07 09:53:22.148414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.368 [2024-10-07 09:53:22.151653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.368 [2024-10-07 09:53:22.160844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.368 [2024-10-07 09:53:22.161352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.368 [2024-10-07 09:53:22.161401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.368 [2024-10-07 09:53:22.161420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.368 [2024-10-07 09:53:22.161642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.368 [2024-10-07 09:53:22.161856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.368 [2024-10-07 09:53:22.161903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.368 [2024-10-07 09:53:22.161923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.368 [2024-10-07 09:53:22.165153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.368 [2024-10-07 09:53:22.174452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.368 [2024-10-07 09:53:22.174980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.368 [2024-10-07 09:53:22.175022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.368 [2024-10-07 09:53:22.175042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.368 [2024-10-07 09:53:22.175280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.368 [2024-10-07 09:53:22.175496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.368 [2024-10-07 09:53:22.175517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.368 [2024-10-07 09:53:22.175534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.368 [2024-10-07 09:53:22.178843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.626 [2024-10-07 09:53:22.188126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.626 [2024-10-07 09:53:22.188523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.626 [2024-10-07 09:53:22.188565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.626 [2024-10-07 09:53:22.188581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.626 [2024-10-07 09:53:22.188803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.626 [2024-10-07 09:53:22.189046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.626 [2024-10-07 09:53:22.189069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.626 [2024-10-07 09:53:22.189083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.626 [2024-10-07 09:53:22.192284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.626 [2024-10-07 09:53:22.201639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.626 [2024-10-07 09:53:22.202013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.626 [2024-10-07 09:53:22.202042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.626 [2024-10-07 09:53:22.202059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.626 [2024-10-07 09:53:22.202274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.626 [2024-10-07 09:53:22.202494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.626 [2024-10-07 09:53:22.202516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.626 [2024-10-07 09:53:22.202538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.626 [2024-10-07 09:53:22.205833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.626 [2024-10-07 09:53:22.215120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.626 [2024-10-07 09:53:22.215499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.626 [2024-10-07 09:53:22.215541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.626 [2024-10-07 09:53:22.215557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.626 [2024-10-07 09:53:22.215785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.626 [2024-10-07 09:53:22.216016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.626 [2024-10-07 09:53:22.216039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.626 [2024-10-07 09:53:22.216053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.626 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.626 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:27.626 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:27.626 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.626 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.626 [2024-10-07 09:53:22.219359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.626 [2024-10-07 09:53:22.228761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.626 [2024-10-07 09:53:22.229140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.626 [2024-10-07 09:53:22.229169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.626 [2024-10-07 09:53:22.229185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.626 [2024-10-07 09:53:22.229408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.626 [2024-10-07 09:53:22.229619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.229640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.229653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.627 [2024-10-07 09:53:22.232925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.627 [2024-10-07 09:53:22.242328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.242716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.627 [2024-10-07 09:53:22.242744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.627 [2024-10-07 09:53:22.242761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.627 [2024-10-07 09:53:22.243009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.627 [2024-10-07 09:53:22.243244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.243265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.243278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.627 [2024-10-07 09:53:22.244381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.627 [2024-10-07 09:53:22.246502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
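The rpc_cmd trace above (nvmf_create_transport -t tcp -o -u 8192) together with the "*** TCP Transport Init ***" notice is the point where the target's TCP transport comes up. Run standalone against the target's RPC socket, the same call looks like this (the socket path is rpc.py's default, /var/tmp/spdk.sock, and is stated here as an assumption):

# Same transport-creation call the test issues through rpc_cmd above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192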
00:32:27.627 [2024-10-07 09:53:22.255742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.256115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.627 [2024-10-07 09:53:22.256144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.627 [2024-10-07 09:53:22.256161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.627 [2024-10-07 09:53:22.256385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.627 [2024-10-07 09:53:22.256597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.256618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.256631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.627 [2024-10-07 09:53:22.259870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.627 [2024-10-07 09:53:22.269356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.269756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.627 [2024-10-07 09:53:22.269803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.627 [2024-10-07 09:53:22.269820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.627 [2024-10-07 09:53:22.270085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.627 [2024-10-07 09:53:22.270320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.270342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.270356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.627 [2024-10-07 09:53:22.273592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.627 [2024-10-07 09:53:22.282803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.283325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.627 [2024-10-07 09:53:22.283374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.627 [2024-10-07 09:53:22.283393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.627 [2024-10-07 09:53:22.283636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.627 Malloc0 00:32:27.627 [2024-10-07 09:53:22.283858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.283880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.283905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.627 [2024-10-07 09:53:22.287162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.627 [2024-10-07 09:53:22.296460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.296844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.627 [2024-10-07 09:53:22.296885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f22c0 with addr=10.0.0.2, port=4420 00:32:27.627 [2024-10-07 09:53:22.296910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f22c0 is same with the state(6) to be set 00:32:27.627 [2024-10-07 09:53:22.297140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f22c0 (9): Bad file descriptor 00:32:27.627 [2024-10-07 09:53:22.297369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.627 [2024-10-07 09:53:22.297391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.627 [2024-10-07 09:53:22.297405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
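Interleaved with the reconnect noise, the rpc_cmd traces above and immediately below populate the target: a 64 MB malloc bdev with 512-byte blocks (matching MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 later in the log), a subsystem cnode1 that allows any host and uses serial SPDK00000000000001, the namespace backed by Malloc0, and finally the TCP listener on 10.0.0.2:4420. Consolidated into plain rpc.py calls (commands and ordering mirror the log; the RPC socket is rpc.py's default):

# Target-side setup, consolidated from the rpc_cmd traces in this log.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420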
00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:27.627 [2024-10-07 09:53:22.300741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.627 [2024-10-07 09:53:22.303722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.627 09:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1663067 00:32:27.627 [2024-10-07 09:53:22.310003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.627 [2024-10-07 09:53:22.341438] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:36.942 3594.17 IOPS, 14.04 MiB/s 4360.86 IOPS, 17.03 MiB/s 4892.75 IOPS, 19.11 MiB/s 5347.33 IOPS, 20.89 MiB/s 5687.60 IOPS, 22.22 MiB/s 5973.45 IOPS, 23.33 MiB/s 6211.50 IOPS, 24.26 MiB/s 6415.46 IOPS, 25.06 MiB/s 6583.50 IOPS, 25.72 MiB/s 6730.80 IOPS, 26.29 MiB/s 00:32:36.942 Latency(us) 00:32:36.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:36.942 Verification LBA range: start 0x0 length 0x4000 00:32:36.942 Nvme1n1 : 15.01 6731.45 26.29 9032.33 0.00 8095.98 570.41 22330.79 00:32:36.942 =================================================================================================================== 00:32:36.942 Total : 6731.45 26.29 9032.33 0.00 8095.98 570.41 22330.79 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.942 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.942 rmmod nvme_tcp 00:32:37.201 rmmod nvme_fabrics 00:32:37.201 rmmod nvme_keyring 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1663729 ']' 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1663729 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1663729 ']' 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1663729 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663729 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663729' 00:32:37.201 killing process with pid 1663729 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1663729 00:32:37.201 09:53:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1663729 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.460 09:53:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.992 00:32:39.992 real 0m23.975s 00:32:39.992 user 1m2.261s 00:32:39.992 sys 0m5.235s 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:39.992 ************************************ 00:32:39.992 END TEST nvmf_bdevperf 00:32:39.992 ************************************ 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.992 ************************************ 00:32:39.992 START TEST nvmf_target_disconnect 00:32:39.992 ************************************ 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:39.992 * Looking for test storage... 00:32:39.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:39.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.992 --rc genhtml_branch_coverage=1 00:32:39.992 --rc genhtml_function_coverage=1 00:32:39.992 --rc genhtml_legend=1 00:32:39.992 --rc geninfo_all_blocks=1 00:32:39.992 --rc geninfo_unexecuted_blocks=1 00:32:39.992 00:32:39.992 ' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:39.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.992 --rc genhtml_branch_coverage=1 00:32:39.992 --rc genhtml_function_coverage=1 00:32:39.992 --rc genhtml_legend=1 00:32:39.992 --rc geninfo_all_blocks=1 00:32:39.992 --rc geninfo_unexecuted_blocks=1 00:32:39.992 00:32:39.992 ' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:39.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.992 --rc genhtml_branch_coverage=1 00:32:39.992 --rc genhtml_function_coverage=1 00:32:39.992 --rc genhtml_legend=1 00:32:39.992 --rc geninfo_all_blocks=1 00:32:39.992 --rc geninfo_unexecuted_blocks=1 00:32:39.992 00:32:39.992 ' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:39.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.992 --rc genhtml_branch_coverage=1 00:32:39.992 --rc genhtml_function_coverage=1 00:32:39.992 --rc genhtml_legend=1 00:32:39.992 --rc geninfo_all_blocks=1 00:32:39.992 --rc geninfo_unexecuted_blocks=1 00:32:39.992 00:32:39.992 ' 00:32:39.992 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.993 09:53:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:42.526 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.526 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:42.527 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:42.527 Found net devices under 0000:84:00.0: cvl_0_0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:42.527 Found net devices under 0000:84:00.1: cvl_0_1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
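(The trace above resolves both E810 ports, 8086:159b under the ice driver, to their kernel netdevs by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone equivalent, assuming the same PCI addresses as this run, would be:

    lspci -d 8086:159b                        # should list 0000:84:00.0 and 0000:84:00.1 on this host
    ls /sys/bus/pci/devices/0000:84:00.0/net  # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:84:00.1/net  # -> cvl_0_1
)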
00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:32:42.527 00:32:42.527 --- 10.0.0.2 ping statistics --- 00:32:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.527 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:42.527 00:32:42.527 --- 10.0.0.1 ping statistics --- 00:32:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.527 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:42.527 ************************************ 00:32:42.527 START TEST nvmf_target_disconnect_tc1 00:32:42.527 ************************************ 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:42.527 09:53:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:42.527 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.527 [2024-10-07 09:53:37.307423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.527 [2024-10-07 09:53:37.307496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ac620 with addr=10.0.0.2, port=4420 00:32:42.527 [2024-10-07 09:53:37.307538] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:42.528 [2024-10-07 09:53:37.307560] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.528 [2024-10-07 09:53:37.307577] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:42.528 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:42.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:42.528 Initializing NVMe Controllers 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.528 00:32:42.528 real 0m0.112s 00:32:42.528 user 0m0.048s 00:32:42.528 sys 0m0.063s 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:42.528 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:42.528 ************************************ 00:32:42.528 END TEST nvmf_target_disconnect_tc1 00:32:42.528 ************************************ 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:42.787 ************************************ 00:32:42.787 START TEST nvmf_target_disconnect_tc2 00:32:42.787 ************************************ 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1666921 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1666921 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1666921 ']' 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.787 09:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:42.787 [2024-10-07 09:53:37.475364] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:32:42.787 [2024-10-07 09:53:37.475444] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.787 [2024-10-07 09:53:37.563798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.046 [2024-10-07 09:53:37.741354] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.046 [2024-10-07 09:53:37.741478] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:43.046 [2024-10-07 09:53:37.741515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.046 [2024-10-07 09:53:37.741544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.046 [2024-10-07 09:53:37.741569] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.046 [2024-10-07 09:53:37.744786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:32:43.046 [2024-10-07 09:53:37.744952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:32:43.046 [2024-10-07 09:53:37.744866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:32:43.046 [2024-10-07 09:53:37.744956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.305 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 Malloc0 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 [2024-10-07 09:53:38.128375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 09:53:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 [2024-10-07 09:53:38.160814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1667069 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:43.563 09:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:45.481 09:53:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1666921 00:32:45.481 09:53:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:45.481 Read completed with error (sct=0, sc=8) 00:32:45.481 starting I/O failed 00:32:45.481 Read completed with error (sct=0, sc=8) 00:32:45.481 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error 
(sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 [2024-10-07 09:53:40.186990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 
00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 [2024-10-07 09:53:40.187319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 
starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Read completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 Write completed with error (sct=0, sc=8) 00:32:45.482 starting I/O failed 00:32:45.482 [2024-10-07 09:53:40.187689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:45.482 [2024-10-07 09:53:40.187938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.482 [2024-10-07 09:53:40.187977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.482 qpair failed and we were unable to recover it. 00:32:45.482 [2024-10-07 09:53:40.188126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.482 [2024-10-07 09:53:40.188154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.482 qpair failed and we were unable to recover it. 00:32:45.482 [2024-10-07 09:53:40.188320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.482 [2024-10-07 09:53:40.188344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.482 qpair failed and we were unable to recover it. 00:32:45.482 [2024-10-07 09:53:40.188578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.482 [2024-10-07 09:53:40.188630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.482 qpair failed and we were unable to recover it. 00:32:45.482 [2024-10-07 09:53:40.188816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.482 [2024-10-07 09:53:40.188878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.482 qpair failed and we were unable to recover it. 00:32:45.482 [2024-10-07 09:53:40.189040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 
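(For reference, the target-side configuration applied by the rpc_cmd trace above, host/target_disconnect.sh@19-26, can be expressed as a scripts/rpc.py sketch, assuming rpc_cmd in this harness forwards to that script over the default /var/tmp/spdk.sock; the flags simply mirror the traced commands:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
)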
00:32:45.483 [2024-10-07 09:53:40.189180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.189371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.189545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.189750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.189922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.189948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.190047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.190072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.190248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.190285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.190398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.190422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.190591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.190615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.190791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.190816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 
00:32:45.483 [2024-10-07 09:53:40.191010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.191037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.191208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.191249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.191351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.191417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.191618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.191643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.191791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.191816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.191975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.192145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.192331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.192523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.192727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 
00:32:45.483 [2024-10-07 09:53:40.192955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.192982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.193118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.193143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.193285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.193323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.193508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.193533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.193636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.193659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.193895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.193921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.194040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.194169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.194388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.194518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 
00:32:45.483 [2024-10-07 09:53:40.194720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.194902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.194929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.195061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.195087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.195198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.195227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.195369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.195393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.483 [2024-10-07 09:53:40.195543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.483 [2024-10-07 09:53:40.195567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.483 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.195728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.195752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.195985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.196113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.196337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 
00:32:45.484 [2024-10-07 09:53:40.196527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.196697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.196867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.196912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.197059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.197085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.197224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.197263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.197443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.197467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.197646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.197669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.197832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.197856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.198045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.198071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 00:32:45.484 [2024-10-07 09:53:40.198166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.484 [2024-10-07 09:53:40.198209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.484 qpair failed and we were unable to recover it. 
00:32:45.484 [2024-10-07 09:53:40.198392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.484 [2024-10-07 09:53:40.198421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:45.484 qpair failed and we were unable to recover it.
00:32:45.484 [... the identical three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every further connection attempt in this span, wall-clock 09:53:40.198611 through 09:53:40.238526, elapsed 00:32:45.484 through 00:32:45.490 ...]
00:32:45.490 [2024-10-07 09:53:40.238640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.238664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.238834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.238873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.239034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.239060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.239217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.239241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.239425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.239481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.239666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.239690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.239871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.239922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.240096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.240137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.240302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.240325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.240493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.240518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 
00:32:45.490 [2024-10-07 09:53:40.240654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.240695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.240897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.240922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.241878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.241910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.242055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.242298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 
00:32:45.490 [2024-10-07 09:53:40.242498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.242635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.242772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.242954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.242980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.243129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.243172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.243307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.243345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.243526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.243549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.243702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.243725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.243871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.243918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.244054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.244080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 
00:32:45.490 [2024-10-07 09:53:40.244248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.490 [2024-10-07 09:53:40.244287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.490 qpair failed and we were unable to recover it. 00:32:45.490 [2024-10-07 09:53:40.244428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.244451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.244618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.244657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.244804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.244829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.244972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.245168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.245308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.245500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.245699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.245901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.245928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 
00:32:45.491 [2024-10-07 09:53:40.246087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.246131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.246279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.246302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.246485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.246514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.246679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.246701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.246863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.246927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.247105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.247146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.247297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.247342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.247521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.247543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.247722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.247745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.247916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.247940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 
00:32:45.491 [2024-10-07 09:53:40.248115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.248138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.248239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.248261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.248466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.248525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.248701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.248724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.248866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.248896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.249047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.249186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.249394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.249544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 00:32:45.491 [2024-10-07 09:53:40.249741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.491 qpair failed and we were unable to recover it. 
00:32:45.491 [2024-10-07 09:53:40.249965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.491 [2024-10-07 09:53:40.249995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.250144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.250170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.250285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.250309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.250487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.250512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.250659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.250683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.250837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.250861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.251008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.251175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.251362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.251558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 
00:32:45.492 [2024-10-07 09:53:40.251748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.251951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.251992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.252132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.252156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.252337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.252361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.252534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.252564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.252716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.252755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.252919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.252943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.253128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.253152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.253251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.253276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.253430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.253458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 
00:32:45.492 [2024-10-07 09:53:40.253605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.253636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.253813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.253840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.253991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.254194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.254378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.254575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.254742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.254958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.254988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.255094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.255120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.255243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.255269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 
00:32:45.492 [2024-10-07 09:53:40.255423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.255447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.255638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.255662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.255803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.255843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.256040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.256250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.256450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.256626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.492 [2024-10-07 09:53:40.256765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.492 qpair failed and we were unable to recover it. 00:32:45.492 [2024-10-07 09:53:40.256938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.256964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.257124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.257149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 
00:32:45.493 [2024-10-07 09:53:40.257344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.257392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.257572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.257596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.257719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.257759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.257851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.257897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.258083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.258109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.258239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.258277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.258478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.258536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.258720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.258744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.258862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.258887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.259028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.259053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 
00:32:45.493 [2024-10-07 09:53:40.259252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.259298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.259441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.259483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.259625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.259667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.259801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.259827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.260785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 
00:32:45.493 [2024-10-07 09:53:40.260930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.260961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.261128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.261154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.261318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.261341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.261483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.261507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.261664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.261704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.261873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.261918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.262039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.262071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.262233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.262277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.262425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.262450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.262600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.262640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 
00:32:45.493 [2024-10-07 09:53:40.262835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.262859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.263071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.263259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.263462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.263635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.493 [2024-10-07 09:53:40.263842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.493 qpair failed and we were unable to recover it. 00:32:45.493 [2024-10-07 09:53:40.263981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.264172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.264359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.264497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 
00:32:45.494 [2024-10-07 09:53:40.264657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.264864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.264897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.265970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.265998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 
00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed 00:32:45.494 [2024-10-07 09:53:40.266507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:45.494 [2024-10-07 09:53:40.266751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.266812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 
00:32:45.494 [2024-10-07 09:53:40.266998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.267027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.267170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.267205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.267462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.267514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.267637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.267679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.267810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.267849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.268031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.268059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.268198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.268237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.268400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.268425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.268558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.268582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.268823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.268848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 
00:32:45.494 [2024-10-07 09:53:40.269028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.269054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.269207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.269247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.269429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.269453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.269680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.494 [2024-10-07 09:53:40.269731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.494 qpair failed and we were unable to recover it. 00:32:45.494 [2024-10-07 09:53:40.269943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.269970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.270201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.270227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.270463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.270488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.270586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.270611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.270729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.270755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.270911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.270937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 
00:32:45.495 [2024-10-07 09:53:40.271109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.271135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.271244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.271282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.271489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.271513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.271691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.271715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.271864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.271912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.272073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.272098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.272249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.272317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.272531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.272574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.272782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.272806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.272946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.272972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 
00:32:45.495 [2024-10-07 09:53:40.273104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.273129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.273294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.273318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.273461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.273500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.273609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.273633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.273779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.273813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.274024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.274051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.274256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.274282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.274454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.274509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.274681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.274706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.274847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.274872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 
00:32:45.495 [2024-10-07 09:53:40.275011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.275036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.275215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.275268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.275417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.275459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.275637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.275662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.275786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.275810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.275993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.276145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.276342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.276511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.276708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 
00:32:45.495 [2024-10-07 09:53:40.276862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.495 [2024-10-07 09:53:40.276922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.495 qpair failed and we were unable to recover it. 00:32:45.495 [2024-10-07 09:53:40.277094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.277121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.277325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.277350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.277557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.277604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.277728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.277753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.277935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.277962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.278085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.278112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.278268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.278294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.278441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.278467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.278652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.278711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 
00:32:45.496 [2024-10-07 09:53:40.278835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.278863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.279049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.279076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.279270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.279337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.279559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.279591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.279725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.279755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.280014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.280171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.280383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.280525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.280666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 
00:32:45.496 [2024-10-07 09:53:40.280900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.280925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.281966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.281994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.282129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.282163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.282332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.282364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.282496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.282542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 
00:32:45.496 [2024-10-07 09:53:40.282727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.282792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.282950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.282978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.283141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.283183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.283326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.283400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.496 qpair failed and we were unable to recover it. 00:32:45.496 [2024-10-07 09:53:40.283617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.496 [2024-10-07 09:53:40.283646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.283824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.283913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.284060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.284195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.284340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.284501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 
00:32:45.497 [2024-10-07 09:53:40.284651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.284864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.284941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.285108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.285135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.285279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.285304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.285454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.285492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.285639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.285711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.285919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.285966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.286136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.286165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.497 [2024-10-07 09:53:40.286302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.497 [2024-10-07 09:53:40.286328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.497 qpair failed and we were unable to recover it. 00:32:45.777 [2024-10-07 09:53:40.286486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.286513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.777 qpair failed and we were unable to recover it. 
00:32:45.777 [2024-10-07 09:53:40.286714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.286739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.777 qpair failed and we were unable to recover it. 00:32:45.777 [2024-10-07 09:53:40.286973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.286999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.777 qpair failed and we were unable to recover it. 00:32:45.777 [2024-10-07 09:53:40.287109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.287136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.777 qpair failed and we were unable to recover it. 00:32:45.777 [2024-10-07 09:53:40.287270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.287297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.777 qpair failed and we were unable to recover it. 00:32:45.777 [2024-10-07 09:53:40.287477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.777 [2024-10-07 09:53:40.287507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.287674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.287704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.287856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.287883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.288054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.288081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.288177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.288218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.288405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.288430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-10-07 09:53:40.288608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.288633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.288812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.288838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.288976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.289003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.289171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.289212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.289349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.289374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.289501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.289528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.289709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.289777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.290032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.290063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.290208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.290234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.290428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.290494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-10-07 09:53:40.290724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.290790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.291021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.291050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.291184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.291225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.291392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.291417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.291614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.291638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.291797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.291854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.292112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.292141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.292292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.292317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.292523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.292547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.292717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.292746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 
00:32:45.778 [2024-10-07 09:53:40.292875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.292921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.778 qpair failed and we were unable to recover it. 00:32:45.778 [2024-10-07 09:53:40.293095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.778 [2024-10-07 09:53:40.293120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.293276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.293300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.293492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.293517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.293733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.293771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.293916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.293946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.294097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.294123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.294315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.294339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.294496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.294526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.294665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.294704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 
00:32:45.779 [2024-10-07 09:53:40.294818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.294843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.295968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.295994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.296132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.296174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.296431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.296455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.296577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.296601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 
00:32:45.779 [2024-10-07 09:53:40.296710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.296734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.296866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.296910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.297107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.297133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.297262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.297302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.297437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.297476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.297652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.297676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.297859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.297952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.298099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.779 [2024-10-07 09:53:40.298131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.779 qpair failed and we were unable to recover it. 00:32:45.779 [2024-10-07 09:53:40.298256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.298299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.298493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.298523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 
00:32:45.780 [2024-10-07 09:53:40.298694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.298717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.298917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.298944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.299118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.299158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.299336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.299360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.299519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.299543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.299659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.299684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.299878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.299959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.300153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.300186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.300314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.300355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.300501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.300540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 
00:32:45.780 [2024-10-07 09:53:40.300736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.300761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.300911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.300952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.301102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.301278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.301476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.301682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.301829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.301981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.302008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.302155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.302197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.302370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.302397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 
00:32:45.780 [2024-10-07 09:53:40.302549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.302601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.302824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.302848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.303021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.303047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.303191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.303217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.303378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.780 [2024-10-07 09:53:40.303404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.780 qpair failed and we were unable to recover it. 00:32:45.780 [2024-10-07 09:53:40.303574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.303598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.303716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.303760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.303885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.303932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.304075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.304100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.304229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.304268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 
00:32:45.781 [2024-10-07 09:53:40.304481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.304505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.304677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.304700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.304802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.304827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.305149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.305174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.305298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.305322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.305478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.305504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.305699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.305723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.305915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.305965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.306151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.306180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.306324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.306363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 
00:32:45.781 [2024-10-07 09:53:40.306506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.306544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.306759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.306788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.306913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.306938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.307973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.307998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.308195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.308220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 
00:32:45.781 [2024-10-07 09:53:40.308387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.308417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.308567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.781 [2024-10-07 09:53:40.308591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.781 qpair failed and we were unable to recover it. 00:32:45.781 [2024-10-07 09:53:40.308794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.308866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.309117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.309144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.309322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.309346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.309507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.309531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.309722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.309751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.309982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.310099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.310306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 
00:32:45.782 [2024-10-07 09:53:40.310547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.310721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.310932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.310963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.311132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.311157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.311320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.311345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.311515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.311558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.311681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.311720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.311830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.311855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.312045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.312186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 
00:32:45.782 [2024-10-07 09:53:40.312362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.312512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.312700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.312871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.312902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.313070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.313247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.313433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.313602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.782 [2024-10-07 09:53:40.313752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.782 qpair failed and we were unable to recover it. 00:32:45.782 [2024-10-07 09:53:40.313939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.313966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 
00:32:45.783 [2024-10-07 09:53:40.314149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.314179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.314398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.314422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.314575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.314604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.314739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.314767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.314912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.314954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.315078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.315120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.315244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.315271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.315467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.315493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.315626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.315667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.315880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.315918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 
00:32:45.783 [2024-10-07 09:53:40.316034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.316078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.316241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.316281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.316419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.316474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.316709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.316734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.316869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.316901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.317033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.317059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.317227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.317265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.317414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.317437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.317620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.317649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.317787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.317828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 
00:32:45.783 [2024-10-07 09:53:40.318015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.318175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.318329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.318505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.318667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.318800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.783 [2024-10-07 09:53:40.318825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.783 qpair failed and we were unable to recover it. 00:32:45.783 [2024-10-07 09:53:40.319079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.319106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.319254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.319284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.319445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.319469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.319614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.319639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 
00:32:45.784 [2024-10-07 09:53:40.319762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.319786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.319979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.320112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.320286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.320482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.320659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.320856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.320916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.321222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.321247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.321554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.321580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.321779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.321809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 
00:32:45.784 [2024-10-07 09:53:40.322044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.322069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.322327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.322351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.322533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.322562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.322691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.322730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.322909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.322936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.323167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.323196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.784 [2024-10-07 09:53:40.323467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.784 [2024-10-07 09:53:40.323491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.784 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.323636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.323662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.323813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.323855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.324048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.324073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 
00:32:45.785 [2024-10-07 09:53:40.324215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.324255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.324441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.324471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.324584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.324622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.324816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.324840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.325075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.325101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.325284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.325308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.325439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.325464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.325674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.325716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.325909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.325934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.326146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.326171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 
00:32:45.785 [2024-10-07 09:53:40.326279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.326309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.326459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.326499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.326640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.326678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.326874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.326912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.327105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.327135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.327281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.327320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.327440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.327466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.327711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.327735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.327932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.327958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 00:32:45.785 [2024-10-07 09:53:40.328172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.785 [2024-10-07 09:53:40.328201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.785 qpair failed and we were unable to recover it. 
00:32:45.785 [2024-10-07 09:53:40.328361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.785 [2024-10-07 09:53:40.328385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:45.785 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back for every subsequent reconnect attempt, with only the microsecond timestamps advancing from 09:53:40.328 to 09:53:40.368: posix.c:1055:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:45.793 [2024-10-07 09:53:40.368830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-10-07 09:53:40.368874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-10-07 09:53:40.369034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.369059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-10-07 09:53:40.369237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.369261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-10-07 09:53:40.369457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.369488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-10-07 09:53:40.369655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.369681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-10-07 09:53:40.369814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.369839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-10-07 09:53:40.369992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-10-07 09:53:40.370033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.370143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.370168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.370318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.370367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.370519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.370544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.370714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.370738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 
00:32:45.794 [2024-10-07 09:53:40.370920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.370964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.371091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.371120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.371293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.371333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.371483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.371508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.371659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.371702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.371886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.371932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.372037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.372159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.372325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.372530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 
00:32:45.794 [2024-10-07 09:53:40.372662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.372801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.372829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.373061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.373088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.373260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.373300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.373460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.373488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.373667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.373691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.373871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.373908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.374060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.374084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.374280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.374304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.374429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.374458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 
00:32:45.794 [2024-10-07 09:53:40.374650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.374674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.374807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.374831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.794 [2024-10-07 09:53:40.375028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.794 [2024-10-07 09:53:40.375054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.794 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.375149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.375174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.375287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.375311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.375465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.375490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.375640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.375666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.375844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.375868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.376081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.376111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.376265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.376289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 
00:32:45.795 [2024-10-07 09:53:40.376469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.376509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.376704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.376733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.376905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.376948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.377109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.377135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.377318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.377347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.377474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.377513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.377726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.377750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.377901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.377926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.378090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.378257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 
00:32:45.795 [2024-10-07 09:53:40.378430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.378582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.378755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.378925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.378951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.379138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.379166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.379338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.379377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.379529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.379574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.379727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.379758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.379993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.380020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 00:32:45.795 [2024-10-07 09:53:40.380187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.795 [2024-10-07 09:53:40.380228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.795 qpair failed and we were unable to recover it. 
00:32:45.796 [2024-10-07 09:53:40.380384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.380409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.380535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.380560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.380714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.380739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.380929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.380955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.381140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.381170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.381317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.381357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.381522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.381548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.381677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.381701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.381920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.381962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.382100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.382127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 
00:32:45.796 [2024-10-07 09:53:40.382285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.382309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.382450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.382492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.382669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.382693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.382793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.382833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.382999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.383027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.383139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.383165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.383328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.383353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.383599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.383628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.383777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.383801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.384021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.384046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 
00:32:45.796 [2024-10-07 09:53:40.384187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.384228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.384450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.384476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.384619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.384648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.384884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.384924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.385042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.796 [2024-10-07 09:53:40.385066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.796 qpair failed and we were unable to recover it. 00:32:45.796 [2024-10-07 09:53:40.385212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.385251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.385435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.385459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.385631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.385656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.385802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.385827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.385954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.385980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 
00:32:45.797 [2024-10-07 09:53:40.386127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.386151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.386296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.386336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.386514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.386542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.386706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.386730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.386849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.386874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.387111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.387140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.387357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.387385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.387506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.387529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.387649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.387675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.387865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.387913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 
00:32:45.797 [2024-10-07 09:53:40.388090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.388259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.388448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.388583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.388785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.388932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.388972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.389150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.389176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.389378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.389407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.797 [2024-10-07 09:53:40.389559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.797 [2024-10-07 09:53:40.389583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.797 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.389763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.389788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 
00:32:45.798 [2024-10-07 09:53:40.389917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.389958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.390126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.390151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.390262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.390288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.390427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.390451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.390577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.390602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.390819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.390868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.391073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.391100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.391264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.391289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.391447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.391486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.391603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.391628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 
00:32:45.798 [2024-10-07 09:53:40.391819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.391845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.392967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.392994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.393158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.393211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.393416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.393445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.393610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.393634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 
00:32:45.798 [2024-10-07 09:53:40.393794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.393820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.393947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.393989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.394161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.394201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.394397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.394436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.394581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.394621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.798 [2024-10-07 09:53:40.394837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.798 [2024-10-07 09:53:40.394861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.798 qpair failed and we were unable to recover it. 00:32:45.799 [2024-10-07 09:53:40.395045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.799 [2024-10-07 09:53:40.395071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.799 qpair failed and we were unable to recover it. 00:32:45.799 [2024-10-07 09:53:40.395169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.799 [2024-10-07 09:53:40.395209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.799 qpair failed and we were unable to recover it. 00:32:45.799 [2024-10-07 09:53:40.395387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.799 [2024-10-07 09:53:40.395411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.799 qpair failed and we were unable to recover it. 00:32:45.799 [2024-10-07 09:53:40.395624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.799 [2024-10-07 09:53:40.395648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.799 qpair failed and we were unable to recover it. 
00:32:45.806 [2024-10-07 09:53:40.435618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.435642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.435789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.435818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.435967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.435992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.436129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.436170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.436298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.436339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.436517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.436541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.436726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.436791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.437076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.437102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.437262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.437296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.437469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.437493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 
00:32:45.806 [2024-10-07 09:53:40.437694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.437724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.437856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.437902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.438086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.438113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.438306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.438337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.438560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.438584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.438721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.438787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.439113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.439140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.439284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.439309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.439463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.439488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.439634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.439674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 
00:32:45.806 [2024-10-07 09:53:40.439805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.439843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.439995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.440036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.440163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.440188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.440314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-10-07 09:53:40.440339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.806 qpair failed and we were unable to recover it. 00:32:45.806 [2024-10-07 09:53:40.440509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.440548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.440671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.440714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.440844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.440868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.441087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.441126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.441314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.441344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.441543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.441567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 
00:32:45.807 [2024-10-07 09:53:40.441719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.441743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.441957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.441992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.442144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.442170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.442291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.442316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.442493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.442521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.442686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.442710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.442866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.442915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.443062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.443088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.443272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.443296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.443421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.443459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 
00:32:45.807 [2024-10-07 09:53:40.443630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.443655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.443778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.443803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.444061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.444088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.444266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.444297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.444457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.444482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.444638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.444662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.444829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.444858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.445051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.445263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.445432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 
00:32:45.807 [2024-10-07 09:53:40.445619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.445776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.445953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.445980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.446095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.446120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.446282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.446321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.807 qpair failed and we were unable to recover it. 00:32:45.807 [2024-10-07 09:53:40.446489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-10-07 09:53:40.446513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.446627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.446653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.446787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.446812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.446966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.446993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.447151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.447191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 
00:32:45.808 [2024-10-07 09:53:40.447335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.447360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.447506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.447549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.447651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.447675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.447850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.447875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.448042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.448085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.448253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.448277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.448448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.448486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.448648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.448677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.448823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.448861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.449021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.449053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 
00:32:45.808 [2024-10-07 09:53:40.449205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.449249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.449430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.449460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.449598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.449632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.449817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.449846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.450017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.450043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.450192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.450216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.450360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.450408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.450655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.450679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.450925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.450976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.451147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.451173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 
00:32:45.808 [2024-10-07 09:53:40.451331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.451356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.451507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.451531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.451638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.451663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.451840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-10-07 09:53:40.451878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.808 qpair failed and we were unable to recover it. 00:32:45.808 [2024-10-07 09:53:40.452032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.452057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.452200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.452225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.452398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.452422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.452602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.452636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.452840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.452865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.453111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.453137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 
00:32:45.809 [2024-10-07 09:53:40.453335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.453359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.453479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.453509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.453625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.453649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.453888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.453968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.454103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.454128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.454301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.454324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.454463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.454489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.454657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.454681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.454866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.454897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.455048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.455090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 
00:32:45.809 [2024-10-07 09:53:40.455241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.455266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.455416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.455455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.455613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.455637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.455839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.455880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.456068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.456093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.456263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.456288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.456520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.456550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.456774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.456844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.457124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.457151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.457376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.457401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 
00:32:45.809 [2024-10-07 09:53:40.457583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.457607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.457825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.457852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.457993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.458017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.458151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.458175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.458332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.458362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.458554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.458597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.458784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.458809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.809 [2024-10-07 09:53:40.459004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-10-07 09:53:40.459032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.809 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.459215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.459244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.459404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.459428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 
00:32:45.810 [2024-10-07 09:53:40.459582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.459607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.459848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.459878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.460086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.460111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.460399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.460424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.460575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.460604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.460720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.460745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.460931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.460957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.461107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.461262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.461458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 
00:32:45.810 [2024-10-07 09:53:40.461629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.461760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.461904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.461930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.462052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.462078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.462254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.462283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.462472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.462496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.462669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.462698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.462855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.462880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.463053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.463079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.463209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.463235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 
00:32:45.810 [2024-10-07 09:53:40.463443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.463467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.463660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.463684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.463864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.463904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.464128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.464153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.464371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.464395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.464541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.464580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.464755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.464778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.464995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.465023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.465167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.465196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.465421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.465445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 
00:32:45.810 [2024-10-07 09:53:40.465636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.465660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.810 [2024-10-07 09:53:40.465795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.810 [2024-10-07 09:53:40.465859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.810 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.466124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.466150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.466376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.466399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.466558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.466587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.466741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.466767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.466952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.466979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.467086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.467113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.467287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.467325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.467475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.467514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 
00:32:45.811 [2024-10-07 09:53:40.467700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.467730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.467870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.467907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.468099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.468124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.468264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.468308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.468461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.468485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.468629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.468668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.468814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.468875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.469103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.469128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.469275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.469300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.469448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.469491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 
00:32:45.811 [2024-10-07 09:53:40.469630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.469669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.469809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.469850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.470937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.470964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.471111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.471138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.471303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.471329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 
00:32:45.811 [2024-10-07 09:53:40.471540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.471575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.471723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.471747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.471942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.471972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.472194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.472218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.472402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.472466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.472707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.472731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.472929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.472958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.811 qpair failed and we were unable to recover it. 00:32:45.811 [2024-10-07 09:53:40.473104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.811 [2024-10-07 09:53:40.473131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.473257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.473282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.473511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.473554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 
00:32:45.812 [2024-10-07 09:53:40.473732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.473797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.474058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.474091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.474296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.474363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.474655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.474682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.474819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.474882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.475152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.475178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.475322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.475347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.475446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.475471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.475719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.475748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.475961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.475986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 
00:32:45.812 [2024-10-07 09:53:40.476157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.476202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.476489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.476516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.476709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.476739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.476886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.476918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.477934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.477975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 
00:32:45.812 [2024-10-07 09:53:40.478218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.478248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.478548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.478573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.478717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.478742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.478923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.478950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.479108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.479134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.479305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.479328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.479450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.479476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.479686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.479726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.479880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.479924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.480133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.480158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 
00:32:45.812 [2024-10-07 09:53:40.480417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.480481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.480806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.480830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.481005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.481032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.812 qpair failed and we were unable to recover it. 00:32:45.812 [2024-10-07 09:53:40.481221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.812 [2024-10-07 09:53:40.481253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.481449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.481474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.481702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.481727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.481867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.481905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.482055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.482080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.482274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.482340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.482564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.482587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 
00:32:45.813 [2024-10-07 09:53:40.482712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.482741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.482985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.483016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.483222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.483288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.483546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.483569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.483742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.483772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.483956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.483995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.484161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.484190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.484387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.484412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.484572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.484602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.484718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.484743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 
00:32:45.813 [2024-10-07 09:53:40.484917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.484943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.485099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.485139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.485298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.485327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.485481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.485505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.485651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.485728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.486018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.486208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.486356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.486489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.486657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 
00:32:45.813 [2024-10-07 09:53:40.486908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.486938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.487105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.487132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.487334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.487400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.487680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.813 [2024-10-07 09:53:40.487707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.813 qpair failed and we were unable to recover it. 00:32:45.813 [2024-10-07 09:53:40.487888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.487931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.488072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.488097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.488302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.488368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.488664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.488691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.488880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.488921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.489110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.489136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 
00:32:45.814 [2024-10-07 09:53:40.489327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.489393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.489645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.489671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.489846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.489876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.490117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.490142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.490298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.490358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.490678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.490703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.490940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.490970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.491244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.491269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.491468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.491536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.491838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.491863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 
00:32:45.814 [2024-10-07 09:53:40.492055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.492082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.492221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.492250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.492415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.492481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.492818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.492842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.493026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.493054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.493220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.493245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.493385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.493460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.493752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.493817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.494103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.494130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.494284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.494323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 
00:32:45.814 [2024-10-07 09:53:40.494431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.494509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.494786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.494853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.495070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.495095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.495221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.495246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.495423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.495447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.495689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.495716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.495878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.495916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.496036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.496062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.496315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.496381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 00:32:45.814 [2024-10-07 09:53:40.496647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.814 [2024-10-07 09:53:40.496673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.814 qpair failed and we were unable to recover it. 
00:32:45.814 [2024-10-07 09:53:40.496920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.496951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.497095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.497120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.497304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.497343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.497584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.497608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.497904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.497946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.498107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.498134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.498285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.498330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.498610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.498634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.498840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.498870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 00:32:45.815 [2024-10-07 09:53:40.499128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.815 [2024-10-07 09:53:40.499154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.815 qpair failed and we were unable to recover it. 
00:32:45.815 [2024-10-07 09:53:40.499365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.815 [2024-10-07 09:53:40.499390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:45.815 qpair failed and we were unable to recover it.
00:32:45.815-00:32:45.819 [2024-10-07 09:53:40.499580 .. 09:53:40.550020] the same error pair repeats continuously for every remaining connection attempt in this interval: posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:32:45.819 [2024-10-07 09:53:40.550173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.550207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.550394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.550417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.550612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.550677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.550931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.550955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.551136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.551165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.551291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.819 [2024-10-07 09:53:40.551330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.819 qpair failed and we were unable to recover it. 00:32:45.819 [2024-10-07 09:53:40.551469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.551493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.551831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.551913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.552170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.552212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.552387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.552410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.552646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.552711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.553034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.553058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.553252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.553281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.553439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.553462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.553704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.553769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.554052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.554078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.554217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.554246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.554426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.554450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.554601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.554625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.554840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.554937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.555106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.555131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.555281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.555305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.555467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.555531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.555762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.555785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.555960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.555990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.556200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.556224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.556419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.556484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.556795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.556819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.556965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.557001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.557188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.557212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.557425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.557491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.557806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.557872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.558124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.558150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.558321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.558344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.558588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.558652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.558964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.558988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.559169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.559211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.559421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.559445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.559690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.559755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.560055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.560080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.560262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.560293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.560465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.560489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.560696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.560761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.561046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.561071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.561202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.561231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.561412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.561436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.561625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.561689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.561902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.561927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.562072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.562111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.562300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.562324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.562516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.562581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.562851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.562874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.563126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.563156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.563294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.563318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.563563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.563627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.563920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.563974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.564080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.564106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.564261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.564299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.564539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.564604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 00:32:45.820 [2024-10-07 09:53:40.564841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.564924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.820 qpair failed and we were unable to recover it. 
00:32:45.820 [2024-10-07 09:53:40.565202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.820 [2024-10-07 09:53:40.565231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.565505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.565529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.565692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.565756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.566051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.566077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.566291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.566320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.566526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.566550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.566701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.566766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.567098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.567124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.567348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.567381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.567519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.567546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 
00:32:45.821 [2024-10-07 09:53:40.567802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.567865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.568146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.568172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.568287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.568312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.568549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.568573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.568737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.568802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.569110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.569135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.569257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.569286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.569426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.569452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.569641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.569667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.569855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.569880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 
00:32:45.821 [2024-10-07 09:53:40.570139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.570168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.570332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.570357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.570543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.570608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.570909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.570933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.571128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.571157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.571411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.571437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:45.821 [2024-10-07 09:53:40.571568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.821 [2024-10-07 09:53:40.571594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:45.821 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.571745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.571769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.572030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.572176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 
00:32:46.124 [2024-10-07 09:53:40.572312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.572517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.572669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.572865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.572901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.573851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.573948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 
00:32:46.124 [2024-10-07 09:53:40.574055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.574965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.574991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.575089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.575275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.575472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 
00:32:46.124 [2024-10-07 09:53:40.575673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.575818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.575971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.575997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.576100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.576125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.576294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.576320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.576478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.576504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.576628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.576654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.576818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.576844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.577009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.577035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.124 qpair failed and we were unable to recover it. 00:32:46.124 [2024-10-07 09:53:40.577217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.124 [2024-10-07 09:53:40.577243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 
00:32:46.125 [2024-10-07 09:53:40.577464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.577490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.577693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.577720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.577856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.577882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.578855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.578881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.579066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 
00:32:46.125 [2024-10-07 09:53:40.579224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.579408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.579620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.579778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.579972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.579999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.580165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.580192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.580372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.580398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.580565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.580601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.580774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.580801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.580913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.580939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 
00:32:46.125 [2024-10-07 09:53:40.581097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.581123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.581366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.581392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.581552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.581578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.581708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.581734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.581962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.581989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.582218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.582244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.582410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.582436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.582571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.582597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.582763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.582793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.582923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.582950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 
00:32:46.125 [2024-10-07 09:53:40.583067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.583093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.583275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.583301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.583461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.583487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.583648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.583674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.583829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.583854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.583982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.584009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.584138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.584164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.125 [2024-10-07 09:53:40.584355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-10-07 09:53:40.584381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.125 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.584590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.584616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.584728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.584753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 
00:32:46.126 [2024-10-07 09:53:40.584914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.584941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.585095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.585293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.585478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.585640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.585824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.585996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.586176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.586371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.586526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 
00:32:46.126 [2024-10-07 09:53:40.586706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.586874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.586917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.587143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.587169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.587350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.587382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.587573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.587599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.587786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.587813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.588031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.588148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.588296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.588491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 
00:32:46.126 [2024-10-07 09:53:40.588650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.588842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.588883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.589898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.589924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.590057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.590105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.590245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.590286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 
00:32:46.126 [2024-10-07 09:53:40.590467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.590492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.590631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.590671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.590839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.590883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.591105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.591131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.591387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.126 [2024-10-07 09:53:40.591415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.126 qpair failed and we were unable to recover it. 00:32:46.126 [2024-10-07 09:53:40.591541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.591582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.591778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.591807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.591980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.592020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.592183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.592212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.592390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.592430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 
00:32:46.127 [2024-10-07 09:53:40.592584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.592613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.592755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.592794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.593022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.593052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.593232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.593271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.593406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.593435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.593660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.593685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.593836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.593864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.594047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.594073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.594233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.594262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.594416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.594440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 
00:32:46.127 [2024-10-07 09:53:40.594618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.594646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.594823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.594859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.595081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.595279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.595460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.595620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.595816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.595971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.596001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.596226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.596264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.596430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.596455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 
00:32:46.127 [2024-10-07 09:53:40.596592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.596635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.596840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.596865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.597037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.597066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.597244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.597283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.597381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.597424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.597591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.597615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.597813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.597878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.598141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.598167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.598302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.598349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.598595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.598619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 
00:32:46.127 [2024-10-07 09:53:40.598810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.598839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.127 [2024-10-07 09:53:40.599006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.127 [2024-10-07 09:53:40.599032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.127 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.599208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.599253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.599495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.599518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.599667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.599704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.599903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.599943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.600086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.600115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.600241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.600280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.600416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.600440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.600561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.600585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 
00:32:46.128 [2024-10-07 09:53:40.600741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.600766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.601011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.601037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.601208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.601237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.601436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.601460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.601637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.601666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.601814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.601838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.602054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.602080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.602213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.602242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.602448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.602477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.602654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.602677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 
00:32:46.128 [2024-10-07 09:53:40.602852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.602880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.603061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.603086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.603217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.603259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.603394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.603432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.603560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.603584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.603799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.603876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.604160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.604200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.604373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.604396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.604571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.604600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.604810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.604874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 
00:32:46.128 [2024-10-07 09:53:40.605128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.605154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.605353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.605377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.605520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.605548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.605673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.128 [2024-10-07 09:53:40.605714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.128 qpair failed and we were unable to recover it. 00:32:46.128 [2024-10-07 09:53:40.605864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.605901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.606076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.606102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.606256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.606281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.606488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.606511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.606709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.606742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.606887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.606939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 
00:32:46.129 [2024-10-07 09:53:40.607070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.607114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.607288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.607312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.607513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.607542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.607679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.607747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.608073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.608099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.608293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.608317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.608487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.608528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.608741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.608805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.609094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.609119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.609345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.609369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 
00:32:46.129 [2024-10-07 09:53:40.609528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.609557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.609740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.609796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.610095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.610121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.610353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.610377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.610567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.610595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.610785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.610850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.611156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.611200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.611402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.611426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.611607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.611636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.611853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.611938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 
00:32:46.129 [2024-10-07 09:53:40.612131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.612155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.612313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.612337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.612499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.612527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.612659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.612697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.612834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.612858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.613018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.613043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.613203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.613245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.613455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.613479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.613652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.613681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 00:32:46.129 [2024-10-07 09:53:40.613867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.129 [2024-10-07 09:53:40.613965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.129 qpair failed and we were unable to recover it. 
00:32:46.129 [2024-10-07 09:53:40.614159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.129 [2024-10-07 09:53:40.614200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.129 qpair failed and we were unable to recover it.
00:32:46.129 [... the same triplet -- posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." -- repeats continuously at timestamps 09:53:40.614348 through 09:53:40.659001 ...]
00:32:46.135 [2024-10-07 09:53:40.659246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.135 [2024-10-07 09:53:40.659270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.135 qpair failed and we were unable to recover it.
00:32:46.135 [2024-10-07 09:53:40.659494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.659523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.659669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.659692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.659913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.659961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.660151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.660177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.660348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.660376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.660526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.660549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.660757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.660786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.660997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.661022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.661166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.661195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.661321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.661346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 
00:32:46.135 [2024-10-07 09:53:40.661562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.661591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.661768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.661792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.662000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.662026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.662201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.662239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.662391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.662419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.662561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.662599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.662815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.135 [2024-10-07 09:53:40.662844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.135 qpair failed and we were unable to recover it. 00:32:46.135 [2024-10-07 09:53:40.663018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.663044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.663274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.663303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.663488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.663512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 
00:32:46.136 [2024-10-07 09:53:40.663683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.663712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.663947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.663973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.664205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.664233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.664407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.664430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.664609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.664650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.664840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.664882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.665040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.665220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.665426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.665590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 
00:32:46.136 [2024-10-07 09:53:40.665744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.665961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.665987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.666145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.666186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.666316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.666354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.666607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.666636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.666858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.666938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.667154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.667194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.667412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.667435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.667672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.667701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.667911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.667968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 
00:32:46.136 [2024-10-07 09:53:40.668133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.668158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.668367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.668391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.668625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.668654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.668884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.668915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.669132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.669162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.669329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.669352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.669481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.669520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.669661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.669700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.669950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.669976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.670121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.670147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 
00:32:46.136 [2024-10-07 09:53:40.670331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.670360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.670523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.670546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.670797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.670826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.670978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.671003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.671189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.671218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.136 [2024-10-07 09:53:40.671392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.136 [2024-10-07 09:53:40.671417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.136 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.671591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.671656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.671975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.672208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.672428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 
00:32:46.137 [2024-10-07 09:53:40.672639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.672780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.672941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.672965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.673127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.673151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.673327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.673356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.673511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.673538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.673671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.673712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.673908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.673932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.674136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.674165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.674341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.674365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 
00:32:46.137 [2024-10-07 09:53:40.674563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.674591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.674767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.674791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.675028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.675057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.675229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.675253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.675424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.675453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.675676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.675699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.675873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.675948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.676133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.676158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.676370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.676399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.676560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.676583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 
00:32:46.137 [2024-10-07 09:53:40.676750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.676790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.676953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.676977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.677160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.677189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.677371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.677402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.677640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.677669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.677910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.677957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.678091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.678115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.678349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.678373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.678577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.137 [2024-10-07 09:53:40.678606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.137 qpair failed and we were unable to recover it. 00:32:46.137 [2024-10-07 09:53:40.678731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.678792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 
00:32:46.138 [2024-10-07 09:53:40.679095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.679121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.679306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.679329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.679515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.679544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.679776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.679841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.680116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.680142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.680308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.680331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.680540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.680569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.680748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.680771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.680936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.680966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.681093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.681118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 
00:32:46.138 [2024-10-07 09:53:40.681274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.681298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.681478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.681502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.681698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.681726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.681963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.681987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.682180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.682209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.682415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.682442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.682588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.682616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.682858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.682901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.683092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.683121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.683332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.683356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 
00:32:46.138 [2024-10-07 09:53:40.683570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.683598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.683809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.683832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.683978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.684005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.684163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.684189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.684333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.684371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.684545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.684569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.684759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.684824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.685121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.685147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.685265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.685308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.685464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.685502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 
00:32:46.138 [2024-10-07 09:53:40.685645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.685684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.685864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.685887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.686075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.686109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.686273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.686296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.686452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.686517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.686766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.686791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.686968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.138 [2024-10-07 09:53:40.686998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.138 qpair failed and we were unable to recover it. 00:32:46.138 [2024-10-07 09:53:40.687177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.687202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.687435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.687464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.687586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.687610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 
00:32:46.139 [2024-10-07 09:53:40.687747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.687771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.687970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.687996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.688133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.688169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.688352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.688375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.688549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.688578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.688761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.688784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.688987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.689016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.689261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.689285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.689449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.689481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.689653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.689685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 
00:32:46.139 [2024-10-07 09:53:40.689862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.689921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.690143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.690169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.690309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.690350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.690544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.690569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.690701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.690726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.690880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.690945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.691124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.691150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.691424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.691448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.691598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.691627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.691807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.691864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 
00:32:46.139 [2024-10-07 09:53:40.692125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.692151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.692339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.692365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.692595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.692624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.692862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.692956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.693922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.693948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 
00:32:46.139 [2024-10-07 09:53:40.694048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.694072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.694260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.694310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.694454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.694484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.694618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.694643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.694845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.694883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.139 qpair failed and we were unable to recover it. 00:32:46.139 [2024-10-07 09:53:40.695135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.139 [2024-10-07 09:53:40.695161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.695318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.695348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.695501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.695526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.695741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.695771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.695947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.695973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 
00:32:46.140 [2024-10-07 09:53:40.696104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.696130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.696330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.696355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.696558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.696587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.696783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.696854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.697116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.697142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.697337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.697360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.697505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.697534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.697763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.697829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.698120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.698147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.698351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.698375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 
00:32:46.140 [2024-10-07 09:53:40.698558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.698583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.698735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.698775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.698938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.698962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.699160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.699186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.699402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.699434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.699612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.699654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.699802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.699867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.700124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.700153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.700282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.700306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.700458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.700482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 
00:32:46.140 [2024-10-07 09:53:40.700627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.700670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.700917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.700943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.701086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.701115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.701311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.701335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.701478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.701522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.701725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.701750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.701931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.701956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.702117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.702142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.702321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.702349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.702562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.702586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 
00:32:46.140 [2024-10-07 09:53:40.702774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.702803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.702962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.702996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.703127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.703171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.140 [2024-10-07 09:53:40.703345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.140 [2024-10-07 09:53:40.703370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.140 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.703538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.703564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.703718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.703742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.703885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.703939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.704130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.704155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.704305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.704333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.704503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.704527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 
00:32:46.141 [2024-10-07 09:53:40.704692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.704722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.704850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.704898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.705111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.705137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.705379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.705404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.705560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.705588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.705836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.705945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.706128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.706153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.706387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.706410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.706586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.706616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.706863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.706949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 
00:32:46.141 [2024-10-07 09:53:40.707169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.707194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.707357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.707396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.707604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.707633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.707795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.707867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.708076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.708111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.708284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.708313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.708534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.708563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.708706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.708730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.708928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.708954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.709155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.709197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 
00:32:46.141 [2024-10-07 09:53:40.709350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.709379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.709509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.709550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.709690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.709729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.709888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.709919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.710090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.710117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.710289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.710323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.710464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.710489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.710713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.710736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.710942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.710967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.711119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.711145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 
00:32:46.141 [2024-10-07 09:53:40.711297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.711336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.711486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.141 [2024-10-07 09:53:40.711512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.141 qpair failed and we were unable to recover it. 00:32:46.141 [2024-10-07 09:53:40.711736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.711801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.712097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.712122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.712241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.712270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.712434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.712457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.712636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.712667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.712864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.712961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.713130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.713155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.713392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.713416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 
00:32:46.142 [2024-10-07 09:53:40.713557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.713586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.713699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.713744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.713968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.713995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.714943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.714987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.715135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.715159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 
00:32:46.142 [2024-10-07 09:53:40.715379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.715409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.715585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.715611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.715726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.715768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.715964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.715989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.716163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.716192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.716323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.716377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.716569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.716599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.716732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.716772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.716906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.716931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.717130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.717157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 
00:32:46.142 [2024-10-07 09:53:40.717304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.717334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.717462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.717487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.717664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.717694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.717968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.717994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.718204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.718233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.142 [2024-10-07 09:53:40.718365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.142 [2024-10-07 09:53:40.718389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.142 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.718530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.718554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.718699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.718739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.718877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.718917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.719117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.719142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 
00:32:46.143 [2024-10-07 09:53:40.719405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.719429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.719598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.719631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.719785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.719816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.720044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.720070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.720219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.720244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.720422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.720447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.720619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.720647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.720843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.720925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.721156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.721198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.721456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.721480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 
00:32:46.143 [2024-10-07 09:53:40.721670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.721699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.721940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.721966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.722150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.722191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.722312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.722337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.722549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.722578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.722841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.722925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.723123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.723147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.723279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.723303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.723446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.723470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 00:32:46.143 [2024-10-07 09:53:40.723635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.143 [2024-10-07 09:53:40.723659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.143 qpair failed and we were unable to recover it. 
00:32:46.143 [2024-10-07 09:53:40.723819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.143 [2024-10-07 09:53:40.723847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.143 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection attempt through 2024-10-07 09:53:40.730579 ...]
00:32:46.144 [2024-10-07 09:53:40.730708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121d5f0 is same with the state(6) to be set
00:32:46.144 [2024-10-07 09:53:40.730920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.144 [2024-10-07 09:53:40.730970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.144 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection attempt through 2024-10-07 09:53:40.763628 ...]
00:32:46.148 [2024-10-07 09:53:40.762681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.148 [2024-10-07 09:53:40.762707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.148 qpair failed and we were unable to recover it. 00:32:46.148 [2024-10-07 09:53:40.762898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.762924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.763116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.763314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.763461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.763628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.763833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.763989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.764150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.764333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 
00:32:46.149 [2024-10-07 09:53:40.764515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.764709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.764900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.764927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.765849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.765882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 
00:32:46.149 [2024-10-07 09:53:40.766195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.766862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.766979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.767114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.767260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.767393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.767552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 
00:32:46.149 [2024-10-07 09:53:40.767708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.767919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.767945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.768094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.768119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.768284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.768323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.768469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.768525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.768699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.768723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.768895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.768920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.769062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.769087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.769246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.149 [2024-10-07 09:53:40.769286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.149 qpair failed and we were unable to recover it. 00:32:46.149 [2024-10-07 09:53:40.769473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.769524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 
00:32:46.150 [2024-10-07 09:53:40.769667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.769690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.769864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.769896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.770897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.770924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.771057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.771083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.771225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.771249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 
00:32:46.150 [2024-10-07 09:53:40.771391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.771431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.771619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.771643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.771778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.771802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.771976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.772165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.772335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.772502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.772659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.772826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.772849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.773003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 
00:32:46.150 [2024-10-07 09:53:40.773202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.773381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.773566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.773778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.773918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.773943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.774120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.774144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.774261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.774285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.774476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.774530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.774652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.774675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.774831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.774856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 
00:32:46.150 [2024-10-07 09:53:40.775034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.775061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.775189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.775229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.775380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.775404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.775562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.775587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.775757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.775782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.775979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.776005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.776132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.776158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.150 qpair failed and we were unable to recover it. 00:32:46.150 [2024-10-07 09:53:40.776341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.150 [2024-10-07 09:53:40.776367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.776536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.776561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.776688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.776713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 
00:32:46.151 [2024-10-07 09:53:40.776834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.776875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.776982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.777196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.777341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.777549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.777751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.777911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.777938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.778033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.778196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.778406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 
00:32:46.151 [2024-10-07 09:53:40.778601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.778724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.778860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.778906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.779862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.779888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.780046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 
00:32:46.151 [2024-10-07 09:53:40.780201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.780361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.780525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.780665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.780836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.780861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.781019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.781193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.781393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.781522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.781693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 
00:32:46.151 [2024-10-07 09:53:40.781875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.151 [2024-10-07 09:53:40.781906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.151 qpair failed and we were unable to recover it. 00:32:46.151 [2024-10-07 09:53:40.782005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.782205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.782411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.782577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.782745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.782900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.782927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.783040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.783171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.783337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 
00:32:46.152 [2024-10-07 09:53:40.783517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.783688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.783857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.783882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.784954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.784981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.785113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.785139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 
00:32:46.152 [2024-10-07 09:53:40.785305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.785364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.785505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.785530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.785670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.785695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.785812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.785836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.785981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.786107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.786254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.786449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.786599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 00:32:46.152 [2024-10-07 09:53:40.786756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.152 [2024-10-07 09:53:40.786785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.152 qpair failed and we were unable to recover it. 
00:32:46.152 [2024-10-07 09:53:40.786931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.152 [2024-10-07 09:53:40.786973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.152 qpair failed and we were unable to recover it.
00:32:46.158 [2024-10-07 09:53:40.825811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.158 [2024-10-07 09:53:40.825836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.158 qpair failed and we were unable to recover it.
00:32:46.158 [2024-10-07 09:53:40.825996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.826193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.826365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.826501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.826689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.826826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.826851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.827047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.827074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.827211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.827253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.827363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.827402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.827562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.827587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 
00:32:46.158 [2024-10-07 09:53:40.831029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.831243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.831456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.831637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.831802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.831972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.831999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.832123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.832156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.832320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.832345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.832518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.832544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.832700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.832740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 
00:32:46.158 [2024-10-07 09:53:40.832915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.832941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.833077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.833104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.833242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.833285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.833436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.158 [2024-10-07 09:53:40.833461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.158 qpair failed and we were unable to recover it. 00:32:46.158 [2024-10-07 09:53:40.833610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.833634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.833818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.833843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.833964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.833990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.834137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.834163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.834297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.834339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.834530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.834555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 
00:32:46.159 [2024-10-07 09:53:40.834657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.834681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.834858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.834912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.835853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.835995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.836131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 
00:32:46.159 [2024-10-07 09:53:40.836302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.836475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.836693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.836833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.836859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.837957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.837984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 
00:32:46.159 [2024-10-07 09:53:40.838083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.838109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.838250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.838277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.838461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.838487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.838702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.838756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.838908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.838935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.839087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.839128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.839261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.839296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.839492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.839544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.839704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.839729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.839909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.839935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 
00:32:46.159 [2024-10-07 09:53:40.840045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.840070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.840212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.840241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.159 qpair failed and we were unable to recover it. 00:32:46.159 [2024-10-07 09:53:40.840433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.159 [2024-10-07 09:53:40.840487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.840638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.840689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.840794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.840820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.840986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.841106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.841283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.841478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.841682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 
00:32:46.160 [2024-10-07 09:53:40.841856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.841905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.842937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.842963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.843081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.843215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.843360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 
00:32:46.160 [2024-10-07 09:53:40.843477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.843653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.843846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.843871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.844858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.844976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 
00:32:46.160 [2024-10-07 09:53:40.845187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.845402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.845606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.845745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.845921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.845948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.160 qpair failed and we were unable to recover it. 00:32:46.160 [2024-10-07 09:53:40.846076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.160 [2024-10-07 09:53:40.846102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.846284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.846327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.846505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.846547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.846720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.846745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.846910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.846952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 
00:32:46.161 [2024-10-07 09:53:40.847088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.847113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.847246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.847285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.847470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.847519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.847689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.847715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.847848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.847887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.848068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.848230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.848345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.848512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.848684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 
00:32:46.161 [2024-10-07 09:53:40.848830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.848855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.849826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.849984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.850142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.850316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 
00:32:46.161 [2024-10-07 09:53:40.850511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.850733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.850916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.850957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.851064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.851089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.851224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.851268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.851424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.161 [2024-10-07 09:53:40.851448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.161 qpair failed and we were unable to recover it. 00:32:46.161 [2024-10-07 09:53:40.851614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.162 [2024-10-07 09:53:40.851638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.162 qpair failed and we were unable to recover it. 00:32:46.162 [2024-10-07 09:53:40.851740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.162 [2024-10-07 09:53:40.851766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.162 qpair failed and we were unable to recover it. 00:32:46.162 [2024-10-07 09:53:40.851910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.162 [2024-10-07 09:53:40.851935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.162 qpair failed and we were unable to recover it. 00:32:46.162 [2024-10-07 09:53:40.852076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.162 [2024-10-07 09:53:40.852101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.162 qpair failed and we were unable to recover it. 
00:32:46.162 [2024-10-07 09:53:40.852248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.162 [2024-10-07 09:53:40.852278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.162 qpair failed and we were unable to recover it.
00:32:46.167 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111, i.e. connection refused; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without interruption from [2024-10-07 09:53:40.852] through [2024-10-07 09:53:40.889] ...]
00:32:46.167 [2024-10-07 09:53:40.889507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.889533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.889663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.889701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.889831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.889855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.890896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.890922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.891056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.891081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 
00:32:46.167 [2024-10-07 09:53:40.891274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.891298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.891469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.891492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.891662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.891686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.891812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.891859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.892046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.892085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.892250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.892298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.892441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.892469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.892669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.892695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.892827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.892852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.893020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 
00:32:46.167 [2024-10-07 09:53:40.893264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.893445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.893644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.893820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.893961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.893987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.894188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.894223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.894358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.894384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.894548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.894580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.894782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.894813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.167 [2024-10-07 09:53:40.894957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.894985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 
00:32:46.167 [2024-10-07 09:53:40.895118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.167 [2024-10-07 09:53:40.895144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.167 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.895267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.895294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.895422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.895447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.895567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.895593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.895739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.895765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 
00:32:46.168 [2024-10-07 09:53:40.896771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.896922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.896948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.897856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.897988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.898113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 
00:32:46.168 [2024-10-07 09:53:40.898303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.898543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.898764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.898951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.898978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.899932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.899958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 
00:32:46.168 [2024-10-07 09:53:40.900087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.900248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.900402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.900550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.900742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.900902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.900932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 
00:32:46.168 [2024-10-07 09:53:40.901687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.901837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.901996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.902026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.902191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.902220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.168 qpair failed and we were unable to recover it. 00:32:46.168 [2024-10-07 09:53:40.902380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.168 [2024-10-07 09:53:40.902403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.169 qpair failed and we were unable to recover it. 00:32:46.169 [2024-10-07 09:53:40.902532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.169 [2024-10-07 09:53:40.902574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.169 qpair failed and we were unable to recover it. 00:32:46.169 [2024-10-07 09:53:40.902722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.169 [2024-10-07 09:53:40.902760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.169 qpair failed and we were unable to recover it. 00:32:46.169 [2024-10-07 09:53:40.902910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.169 [2024-10-07 09:53:40.902949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.169 qpair failed and we were unable to recover it. 00:32:46.169 [2024-10-07 09:53:40.903111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.169 [2024-10-07 09:53:40.903157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.169 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.903324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.903351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 
00:32:46.463 [2024-10-07 09:53:40.903479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.903506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.903692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.903718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.903847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.903883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.904961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.904987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 
00:32:46.463 [2024-10-07 09:53:40.905117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.905273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.905430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.905606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.905764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.905951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.905977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.906110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.906136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.906293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.906318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.906414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.906440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.906576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.906603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 
00:32:46.463 [2024-10-07 09:53:40.906793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.906818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.906983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.907008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.907144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.907170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.907298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.907328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.907486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.907512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.907647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.463 [2024-10-07 09:53:40.907674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.463 qpair failed and we were unable to recover it. 00:32:46.463 [2024-10-07 09:53:40.907805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.907831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.907949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.907975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.908102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.908266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 
00:32:46.464 [2024-10-07 09:53:40.908412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.908540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.908729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.908885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.908928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.909857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.909884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 
00:32:46.464 [2024-10-07 09:53:40.910033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.910190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.910388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.910559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.910682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.910837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.910864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 
00:32:46.464 [2024-10-07 09:53:40.911637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.911828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.911984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.912886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.912918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 00:32:46.464 [2024-10-07 09:53:40.913028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.464 [2024-10-07 09:53:40.913055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.464 qpair failed and we were unable to recover it. 
00:32:46.465 [2024-10-07 09:53:40.920461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.465 [2024-10-07 09:53:40.920490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.465 qpair failed and we were unable to recover it.
00:32:46.465 [2024-10-07 09:53:40.920664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.465 [2024-10-07 09:53:40.920689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.920804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.920831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.920992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.921038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.921194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.921222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.921393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.921417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.921574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.921613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.921728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.921764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.921986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.922013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.466 [2024-10-07 09:53:40.922151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.466 [2024-10-07 09:53:40.922194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.466 qpair failed and we were unable to recover it.
00:32:46.470 [2024-10-07 09:53:40.947434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.947460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.947592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.947616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.947754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.947780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.947921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.947948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.948959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.948986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 
00:32:46.470 [2024-10-07 09:53:40.949081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.949211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.949372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.949558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.949752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.949910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.949937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.950064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.950233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.950452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.950607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 
00:32:46.470 [2024-10-07 09:53:40.950763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.950971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.950997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.951965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.951991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.952152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.952178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.952336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.952363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 
00:32:46.470 [2024-10-07 09:53:40.952487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.952512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.952688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.952713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.952845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.952884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.953000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.953025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.470 [2024-10-07 09:53:40.953121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.470 [2024-10-07 09:53:40.953148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.470 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.953267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.953301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.953442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.953483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.953573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.953598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.953768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.953793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.953915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.953945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 
00:32:46.471 [2024-10-07 09:53:40.954077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.954120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.954291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.954317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.954476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.954504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.954641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.954695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.954845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.954872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.955034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.955062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.955225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.955250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.955498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.955528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.955680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.955706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.955823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.955865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 
00:32:46.471 [2024-10-07 09:53:40.955995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.956170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.956315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.956495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.956657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.956854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.956912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.957046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.957216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.957380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.957545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 
00:32:46.471 [2024-10-07 09:53:40.957734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.957930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.957959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.958072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.958099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.958216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.958249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.958417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.958481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.958700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.958729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.958872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.958931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.959040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.959066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.959182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.959223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.959411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.959435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 
00:32:46.471 [2024-10-07 09:53:40.959631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.959697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.959907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.959951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.471 [2024-10-07 09:53:40.960088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.471 [2024-10-07 09:53:40.960114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.471 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.960360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.960384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.960630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.960660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.960911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.960974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.961083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.961110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.961254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.961280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.961441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.961466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.961653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.961733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 
00:32:46.472 [2024-10-07 09:53:40.961971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.961999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.962102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.962129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.962306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.962341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.962517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.962557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.962673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.962702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.962839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.962864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.963014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.963193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.963365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.963590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 
00:32:46.472 [2024-10-07 09:53:40.963749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.963941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.963968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.964109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.964135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.964320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.964349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.964551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.964623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.964802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.964832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.964951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.964978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.965076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.965103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.965292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.965323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.965499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.965524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 
00:32:46.472 [2024-10-07 09:53:40.965720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.965746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.965904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.965945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.966053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.966080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.966262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.966303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.966438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.966466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.472 qpair failed and we were unable to recover it. 00:32:46.472 [2024-10-07 09:53:40.966620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.472 [2024-10-07 09:53:40.966659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.966786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.966839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.966990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.967124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.967295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 
00:32:46.473 [2024-10-07 09:53:40.967491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.967611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.967753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.967923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.967960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.968067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.968093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.968266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.968291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.968490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.968519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.968653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.968679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.968833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.968859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.969033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 
00:32:46.473 [2024-10-07 09:53:40.969201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.969421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.969609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.969773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.969960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.969987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 
00:32:46.473 [2024-10-07 09:53:40.970824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.970850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.970976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.971846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.971993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.972127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.972312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 
00:32:46.473 [2024-10-07 09:53:40.972584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.972774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.972968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.473 [2024-10-07 09:53:40.972994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.473 qpair failed and we were unable to recover it. 00:32:46.473 [2024-10-07 09:53:40.973120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.973165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.973302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.973342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.973444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.973469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.973641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.973684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.973797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.973831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 
00:32:46.474 [2024-10-07 09:53:40.974316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.974856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.974979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.975136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.975280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.975476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.975621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.975790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.975816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 
00:32:46.474 [2024-10-07 09:53:40.975972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.976166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.976355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.976535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.976745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.976913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.976952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.977052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.977078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.977247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.977287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.977445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.977470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.977643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.977674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 
00:32:46.474 [2024-10-07 09:53:40.977831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.977857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.978966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.978993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.979098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.979124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.979292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.979316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.979491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.979517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 
00:32:46.474 [2024-10-07 09:53:40.979672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.474 [2024-10-07 09:53:40.979698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.474 qpair failed and we were unable to recover it. 00:32:46.474 [2024-10-07 09:53:40.979910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.979936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.980044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.980070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.980258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.980292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.980487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.980556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.980789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.980818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.980957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.980984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.981093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.981119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.981273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.981310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.981443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.981468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 
00:32:46.475 [2024-10-07 09:53:40.981612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.981636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.981799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.981864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.982047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.982072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.982193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.982219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.982346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.982385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.982550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.982573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.982768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.982835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.983033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.983201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.983377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 
00:32:46.475 [2024-10-07 09:53:40.983567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.983765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.983948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.983973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.984069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.984094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.984290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.984330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.984467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.984497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.984732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.984761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.984915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.984942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.985036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.985061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.985209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.985249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 
00:32:46.475 [2024-10-07 09:53:40.985424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.985448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.985634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.985700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.985953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.985979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.986963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.475 [2024-10-07 09:53:40.986989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.475 qpair failed and we were unable to recover it. 00:32:46.475 [2024-10-07 09:53:40.987087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 
00:32:46.476 [2024-10-07 09:53:40.987213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.987363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.987552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.987671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.987885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.987942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.988038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.988064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.988275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.988317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.988497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.988528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.988744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.988767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.988908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.988939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 
00:32:46.476 [2024-10-07 09:53:40.989046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.989073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.989196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.989221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.989412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.989435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.989605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.989634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.989761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.989805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.989994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.990020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.990117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.990143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.990319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.990358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.990522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.990587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.990856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.990947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 
00:32:46.476 [2024-10-07 09:53:40.991058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.991228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.991455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.991622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.991764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.991928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.991954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.992075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.992101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.992263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.992302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.992466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.992494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.992651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.992692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 
00:32:46.476 [2024-10-07 09:53:40.992835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.992860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.993036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.993063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.993244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.993284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.993451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.993476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.993620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.476 [2024-10-07 09:53:40.993662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.476 qpair failed and we were unable to recover it. 00:32:46.476 [2024-10-07 09:53:40.993846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.993869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.994011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.994037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.994160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.994202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.994400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.994423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.994531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.994569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 
00:32:46.477 [2024-10-07 09:53:40.994779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.994808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.994982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.995008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.995111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.995138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.995284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.995333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.995537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.995561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.995762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.995827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.996016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.996180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.996328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.996528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 
00:32:46.477 [2024-10-07 09:53:40.996745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.996909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.996936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.997930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.997957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.998057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.998208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 
00:32:46.477 [2024-10-07 09:53:40.998403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.998535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.998684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.998870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.998903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 00:32:46.477 [2024-10-07 09:53:40.999850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.477 [2024-10-07 09:53:40.999874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.477 qpair failed and we were unable to recover it. 
00:32:46.477 [2024-10-07 09:53:41.000017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.000232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.000432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.000611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.000733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.000926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.000953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.001055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.001191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.001364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.001577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 
00:32:46.478 [2024-10-07 09:53:41.001739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.001949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.001974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.002886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.002942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.003061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.003240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 
00:32:46.478 [2024-10-07 09:53:41.003399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.003571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.003728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.003861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.003885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.004933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.004960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 
00:32:46.478 [2024-10-07 09:53:41.005081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.005112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.005270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.005296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.005424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.005447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.005601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.005627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.005779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.005857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.006027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.006051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.006242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.006285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.478 [2024-10-07 09:53:41.006405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.478 [2024-10-07 09:53:41.006430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.478 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.006590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.006616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.006763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.006788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 
00:32:46.479 [2024-10-07 09:53:41.006950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.006975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.007911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.007949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.008044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.008222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.008420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 
00:32:46.479 [2024-10-07 09:53:41.008581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.008746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.008921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.008957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.009099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.009125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.009280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.009338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.009550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.009575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.009732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.009772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.009971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.010249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.010419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 
00:32:46.479 [2024-10-07 09:53:41.010590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.010746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.010886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.010930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.011855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.011997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 
00:32:46.479 [2024-10-07 09:53:41.012167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.012313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.012442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.012596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.012785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.012850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.479 [2024-10-07 09:53:41.013065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.479 [2024-10-07 09:53:41.013092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.479 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.013204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.013229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.013433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.013458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.013619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.013662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.013816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.013840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 
00:32:46.480 [2024-10-07 09:53:41.013977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.014003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.014134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.014188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.014377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.014402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.014526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.014552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.014744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.014813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.014997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.015023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.015111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.015138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.015320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.015399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.015614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.015640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.015887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.015932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 
00:32:46.480 [2024-10-07 09:53:41.016070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.016095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.016295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.016319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.016525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.016549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.016719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.016799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.016997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.017136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.017303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.017474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.017683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.017835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.017860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 
00:32:46.480 [2024-10-07 09:53:41.018019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.018160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.018379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.018551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.018725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.018927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.018953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.019059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.019084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.020017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.020178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.020359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 
00:32:46.480 [2024-10-07 09:53:41.020560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.020783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.020955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.020984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.480 [2024-10-07 09:53:41.021081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.480 [2024-10-07 09:53:41.021112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.480 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.021252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.021292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.021431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.021456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.021599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.021623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.021804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.021828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.021986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.022125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 
00:32:46.481 [2024-10-07 09:53:41.022309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.022488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.022675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.022933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.022965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.023122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.023148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.023318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.023342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.023455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.023533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.023777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.023804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.023966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.023993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.024093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 
00:32:46.481 [2024-10-07 09:53:41.024215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.024381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.024540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.024747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.024943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.024970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.025081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.025232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.025408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.025609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.025778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 
00:32:46.481 [2024-10-07 09:53:41.025918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.025956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.026049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.026076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.026265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.026289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.026434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.026474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.026588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.026614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.026818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.481 [2024-10-07 09:53:41.026843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.481 qpair failed and we were unable to recover it. 00:32:46.481 [2024-10-07 09:53:41.027000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.027027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.027155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.027198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.027362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.027434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.027628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.027655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 
00:32:46.482 [2024-10-07 09:53:41.027818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.027869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.028910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.028936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.029035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.029060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.029245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.029283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.029434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.029476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 
00:32:46.482 [2024-10-07 09:53:41.029603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.029633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.029813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.029855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.029982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.030928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.030953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.031085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.031111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 
00:32:46.482 [2024-10-07 09:53:41.031215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.031269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.031510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.031540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.031761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.031827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.032920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.032947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.033042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.033068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 
00:32:46.482 [2024-10-07 09:53:41.033234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.033273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.033425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.033449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.033588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.033628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.482 [2024-10-07 09:53:41.033754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.482 [2024-10-07 09:53:41.033779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.482 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.033941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.033969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.034087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.034278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.034405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.034566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.034715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 
00:32:46.483 [2024-10-07 09:53:41.034870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.034906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.035919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.035944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.036037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.036062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 00:32:46.483 [2024-10-07 09:53:41.036200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.483 [2024-10-07 09:53:41.036225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.483 qpair failed and we were unable to recover it. 
[... ~200 further identical qpair connection failures omitted (timestamps 09:53:41.036 through 09:53:41.072): posix_sock_create: connect() failed, errno = 111, and nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e2c000b90 / 0x7f9e38000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:32:46.488 [2024-10-07 09:53:41.072093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.488 [2024-10-07 09:53:41.072120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.488 qpair failed and we were unable to recover it. 00:32:46.488 [2024-10-07 09:53:41.072298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.488 [2024-10-07 09:53:41.072321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.488 qpair failed and we were unable to recover it. 00:32:46.488 [2024-10-07 09:53:41.072475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.488 [2024-10-07 09:53:41.072499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.488 qpair failed and we were unable to recover it. 00:32:46.488 [2024-10-07 09:53:41.072608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.488 [2024-10-07 09:53:41.072632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.488 qpair failed and we were unable to recover it. 00:32:46.488 [2024-10-07 09:53:41.072774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.488 [2024-10-07 09:53:41.072817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.488 qpair failed and we were unable to recover it. 00:32:46.488 [2024-10-07 09:53:41.072983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.073107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.073237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.073442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.073627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 
00:32:46.489 [2024-10-07 09:53:41.073829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.073883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.074054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.074081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.074222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.074248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.074401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.074425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.074581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.074606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.074800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.074865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.075035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.075061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.075145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.075179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.075388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.075413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.075517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.075583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 
00:32:46.489 [2024-10-07 09:53:41.075809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.075834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.076038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.076161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.076348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.076559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.076790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.076993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.077113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.077310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.077530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 
00:32:46.489 [2024-10-07 09:53:41.077698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.077905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.077933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.078032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.078058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.078164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.078205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.078407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.078431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.078651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.078676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.078862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.078911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.079027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.079054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.079170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.079210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.079351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.079393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 
00:32:46.489 [2024-10-07 09:53:41.079616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.079682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.079953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.489 [2024-10-07 09:53:41.079981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.489 qpair failed and we were unable to recover it. 00:32:46.489 [2024-10-07 09:53:41.080081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.080108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.080264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.080290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.080488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.080512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.080706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.080731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.080905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.080932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.081020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.081177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.081332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 
00:32:46.490 [2024-10-07 09:53:41.081512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.081669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.081844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.081884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.082875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.082924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.083053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.083080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 
00:32:46.490 [2024-10-07 09:53:41.083235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.083260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.083455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.083480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.083652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.083678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.083804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.083851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.084963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.084990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 
00:32:46.490 [2024-10-07 09:53:41.085113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.085142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.085318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.085345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.085535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.085565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.085707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.085741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.085900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.490 [2024-10-07 09:53:41.085926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.490 qpair failed and we were unable to recover it. 00:32:46.490 [2024-10-07 09:53:41.086051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.086292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.086483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.086644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.086808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 
00:32:46.491 [2024-10-07 09:53:41.086956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.086982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.087092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.087118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.087257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.087283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.087439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.087463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.087623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.087649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.087849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.087889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.088023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.088057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.088241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.088283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.088444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.088468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.088612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.088650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 
00:32:46.491 [2024-10-07 09:53:41.088881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.088966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.089857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.089918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.090028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.090193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.090338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 
00:32:46.491 [2024-10-07 09:53:41.090493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.090671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.090856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.090908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.091922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.091949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.092066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.092091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 
00:32:46.491 [2024-10-07 09:53:41.092294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.092318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.092504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.092548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.491 [2024-10-07 09:53:41.092711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.491 [2024-10-07 09:53:41.092736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.491 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.092926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.092952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.093936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.093962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 
00:32:46.492 [2024-10-07 09:53:41.094062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.094088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.094250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.094288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.094491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.094515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.094693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.094726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.094881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.094927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.095029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.095055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.095180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.095216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.095366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.095405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.095581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.095612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.095782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.095850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 
00:32:46.492 [2024-10-07 09:53:41.096056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.096082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.096231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.096264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.096406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.096447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.096616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.096640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.096813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.096836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.096976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.097128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.097372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.097588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.097729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 
00:32:46.492 [2024-10-07 09:53:41.097937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.097964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.098063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.098089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.098282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.098307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.098466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.098491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.098676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.098702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.098921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.098946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.099092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.099118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.099274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.099316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.099501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.099525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 00:32:46.492 [2024-10-07 09:53:41.099809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.099833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.492 qpair failed and we were unable to recover it. 
00:32:46.492 [2024-10-07 09:53:41.100078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.492 [2024-10-07 09:53:41.100105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.100284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.100317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.100516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.100539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.100747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.100811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.101035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.101061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.101272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.101296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.101417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.101446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.101606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.101644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.101846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.101884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.102045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.102069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 
00:32:46.493 [2024-10-07 09:53:41.102243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.102266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.102438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.102461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.102665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.102729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.102988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.103125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.103302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.103511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.103684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.103887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.103923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.104040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.104065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 
00:32:46.493 [2024-10-07 09:53:41.104208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.104233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.104440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.104469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.104725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.104748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.104947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.104972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.105070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.105112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.105291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.105314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.105467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.105490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.105697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.105733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.105918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.105942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.106093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.106117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 
00:32:46.493 [2024-10-07 09:53:41.106228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.106253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.106425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.106463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.106651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.106674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.106879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.106917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.107069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.107094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.107222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.107246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.107487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.107516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.107656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.107679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.493 [2024-10-07 09:53:41.107829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.493 [2024-10-07 09:53:41.107868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.493 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.108037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.108062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 
00:32:46.494 [2024-10-07 09:53:41.108199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.108223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.108382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.108406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.108656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.108684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.108821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.108844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.109074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.109098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.109235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.109264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.109418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.109442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.109618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.109642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.109817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.109846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.110015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.110040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 
00:32:46.494 [2024-10-07 09:53:41.110209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.110248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.110538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.110567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.110790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.110813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.110954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.110979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.111166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.111298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.111451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.111623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.111815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.111980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 
00:32:46.494 [2024-10-07 09:53:41.112148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.112375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.112566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.112783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.112922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.112948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.113050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.113075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.113285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.113314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.113492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.113522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.113699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.113723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.113903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.113945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 
00:32:46.494 [2024-10-07 09:53:41.114094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.114118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.114268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.114292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.114395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.114420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.114657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.114681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.114853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.114897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.115070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.115099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.494 qpair failed and we were unable to recover it. 00:32:46.494 [2024-10-07 09:53:41.115262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.494 [2024-10-07 09:53:41.115285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.115425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.115463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.115643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.115672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.115876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.115923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 
00:32:46.495 [2024-10-07 09:53:41.116072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.116096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.116297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.116326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.116488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.116512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.116652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.116690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.116872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.116909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.117090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.117114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.117268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.117306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.117441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.117479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.117669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.117693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.117882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.117972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 
00:32:46.495 [2024-10-07 09:53:41.118132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.118156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.118340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.118363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.118493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.118532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.118675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.118714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.118906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.119191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.119348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.119561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.119742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.119905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.119930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 
00:32:46.495 [2024-10-07 09:53:41.120123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.120147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.120300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.120323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.120463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.120504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.120679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.120708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.120878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.120922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.121085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.121114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.121268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.121291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.121548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.121571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.121767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.121832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 00:32:46.495 [2024-10-07 09:53:41.122121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.495 [2024-10-07 09:53:41.122146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.495 qpair failed and we were unable to recover it. 
00:32:46.496 [2024-10-07 09:53:41.122346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.122369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.122606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.122635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.122843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.122866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.123053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.123079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.123235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.123265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.123427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.123450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.123658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.123682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.123831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.123860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.124009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.124034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.124191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.124215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 
00:32:46.496 [2024-10-07 09:53:41.124399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.124428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.124584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.124608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.124725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.124750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.124990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.125014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.125220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.125244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.125397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.125420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.125552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.125591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.125745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.125769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.126100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.126126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.126261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.126303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 
00:32:46.496 [2024-10-07 09:53:41.126481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.126504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.126656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.126679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.126811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.126851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.127058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.127084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.127266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.127293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.127474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.127503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.127741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.127764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.127935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.127959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.128118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.128143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.128334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.128357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 
00:32:46.496 [2024-10-07 09:53:41.128493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.128516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.128717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.128746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.128862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.128904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.129061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.129100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.129228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.129275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.129465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.129488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.129716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.496 [2024-10-07 09:53:41.129740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.496 qpair failed and we were unable to recover it. 00:32:46.496 [2024-10-07 09:53:41.129951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.497 [2024-10-07 09:53:41.129980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-10-07 09:53:41.130179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.497 [2024-10-07 09:53:41.130217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.497 qpair failed and we were unable to recover it. 00:32:46.497 [2024-10-07 09:53:41.130447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.497 [2024-10-07 09:53:41.130471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.497 qpair failed and we were unable to recover it. 
00:32:46.497 [2024-10-07 09:53:41.130648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.497 [2024-10-07 09:53:41.130677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.497 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every reconnect attempt from 09:53:41.130 through 09:53:41.173: connect() to 10.0.0.2, port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f9e38000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:46.502 [2024-10-07 09:53:41.173154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.502 [2024-10-07 09:53:41.173178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.502 qpair failed and we were unable to recover it.
00:32:46.502 [2024-10-07 09:53:41.173360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.173388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.173546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.173570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.173797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.173821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.174069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.174098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.174230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.174254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.174408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.174446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.174652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.174681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.174859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.174961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.175147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.175184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.175341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.175370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 
00:32:46.503 [2024-10-07 09:53:41.175503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.175541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.175657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.175681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.175853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.175901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.176077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.176101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.176249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.176291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.176449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.176478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.176672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.176696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.176884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.176970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.177210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.177239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.177409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.177433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 
00:32:46.503 [2024-10-07 09:53:41.177597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.177636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.177849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.177946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.178167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.178325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.178494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.178671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.178824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.178989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.179190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.179357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 
00:32:46.503 [2024-10-07 09:53:41.179526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.179694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.179963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.179987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.180140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.180169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.180389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.180413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.180566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.503 [2024-10-07 09:53:41.180590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.503 qpair failed and we were unable to recover it. 00:32:46.503 [2024-10-07 09:53:41.180726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.180767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.180944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.180968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.181142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.181165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.181348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.181387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 
00:32:46.504 [2024-10-07 09:53:41.181543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.181566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.181748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.181771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.181986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.182023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.182193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.182232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.182382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.182406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.182603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.182632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.182813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.182836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.183033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.183058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.183213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.183252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.183435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.183458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 
00:32:46.504 [2024-10-07 09:53:41.183631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.183655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.183807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.183831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.184018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.184044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.184221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.184245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.184451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.184485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.184717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.184740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.184981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.185176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.185376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.185543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 
00:32:46.504 [2024-10-07 09:53:41.185693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.185898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.185922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.186921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.504 [2024-10-07 09:53:41.186948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.504 qpair failed and we were unable to recover it. 00:32:46.504 [2024-10-07 09:53:41.187061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.187226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 
00:32:46.505 [2024-10-07 09:53:41.187334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.187537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.187724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.187880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.187929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.188908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.188936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 
00:32:46.505 [2024-10-07 09:53:41.189083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.189123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.189342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.189387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.189540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.189567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.189683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.189722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.189854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.189907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.190063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.190241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.190409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.190593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.190791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 
00:32:46.505 [2024-10-07 09:53:41.190958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.190983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.191143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.191334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.191514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.191659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.191837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.191976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.192221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.192422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.192622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 
00:32:46.505 [2024-10-07 09:53:41.192782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.192889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.192920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.505 [2024-10-07 09:53:41.193924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.505 [2024-10-07 09:53:41.193952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.505 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.194076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.194227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 
00:32:46.506 [2024-10-07 09:53:41.194400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.194571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.194763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.194921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.194949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.195057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.195084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.195255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.195281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.195441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.195481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.195639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.195670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.195903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.195929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.196090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.196116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 
00:32:46.506 [2024-10-07 09:53:41.199679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.199717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.199949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.199977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.200962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.200986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.201134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.201159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 00:32:46.506 [2024-10-07 09:53:41.201286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.506 [2024-10-07 09:53:41.201326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.506 qpair failed and we were unable to recover it. 
00:32:46.506 [2024-10-07 09:53:41.201479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.506 [2024-10-07 09:53:41.201503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:46.506 qpair failed and we were unable to recover it.
00:32:46.506 [... the same error pair (posix.c:1055:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420) repeats continuously from 09:53:41.201 through 09:53:41.243 (console timestamps 00:32:46.506-00:32:46.512) for tqpair=0x120f630, 0x7f9e38000b90 and 0x7f9e30000b90, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:32:46.512 [2024-10-07 09:53:41.243428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.796 [2024-10-07 09:53:41.243455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.796 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.243585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.243611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.243764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.243803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.243956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.243995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.244928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.244955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 
00:32:46.797 [2024-10-07 09:53:41.245099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.245261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.245424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.245564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.245725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.245931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.245971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.246140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.246166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.246314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.246340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.246448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.246474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.246612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.246637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 
00:32:46.797 [2024-10-07 09:53:41.246855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.246903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.247906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.247932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.248101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.248296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.248515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 
00:32:46.797 [2024-10-07 09:53:41.248668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.248795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.248952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.248977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.249182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.249368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.249569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.249738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.797 [2024-10-07 09:53:41.249898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.797 qpair failed and we were unable to recover it. 00:32:46.797 [2024-10-07 09:53:41.249997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.250174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 
00:32:46.798 [2024-10-07 09:53:41.250385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.250576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.250731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.250941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.250976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.251148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.251186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.251337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.251395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.251531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.251568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.251708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.251748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.251889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.251946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.252074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 
00:32:46.798 [2024-10-07 09:53:41.252255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.252460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.252606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.252760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.252961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.252987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.253116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.253140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.253288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.253313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.253477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.253501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.253673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.253697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.253826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.253850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 
00:32:46.798 [2024-10-07 09:53:41.254080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.254106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.254261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.254285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.254445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.254484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.254625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.254653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.254869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.254903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.255054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.255254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.255455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.255599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.255786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 
00:32:46.798 [2024-10-07 09:53:41.255968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.255993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.798 qpair failed and we were unable to recover it. 00:32:46.798 [2024-10-07 09:53:41.256939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.798 [2024-10-07 09:53:41.256963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.257105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.257259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.257466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 
00:32:46.799 [2024-10-07 09:53:41.257634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.257759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.257933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.257957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.258888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.258927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.259063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 
00:32:46.799 [2024-10-07 09:53:41.259224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.259392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.259567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.259724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.259926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.259969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.260086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.260110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.260284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.260307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.260473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.260496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.260675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.260699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.260875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.260911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 
00:32:46.799 [2024-10-07 09:53:41.261090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.261212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.261375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.261564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.261730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.261919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.261965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.262114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.262146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.262320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.262344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.262472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.262496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.262645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.262668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 
00:32:46.799 [2024-10-07 09:53:41.262826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.262851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.262996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.263035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.263209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.263232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.263345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.263369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.263520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.799 [2024-10-07 09:53:41.263543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.799 qpair failed and we were unable to recover it. 00:32:46.799 [2024-10-07 09:53:41.263697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.263721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.263866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.263896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.264055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.264229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.264413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 
00:32:46.800 [2024-10-07 09:53:41.264614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.264769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.264931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.264956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.265845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.265869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 00:32:46.800 [2024-10-07 09:53:41.266011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.266036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it. 
00:32:46.800 [2024-10-07 09:53:41.266159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.800 [2024-10-07 09:53:41.266197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.800 qpair failed and we were unable to recover it.
00:32:46.800 [the same three-message sequence repeats for every retried connection attempt between 2024-10-07 09:53:41.266 and 09:53:41.303: posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:32:46.805 [2024-10-07 09:53:41.303686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.805 [2024-10-07 09:53:41.303712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it.
00:32:46.806 [2024-10-07 09:53:41.303880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.303916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.304941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.304965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.305113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.305138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.305322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.305350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.305486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.305523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 
00:32:46.806 [2024-10-07 09:53:41.305725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.305753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.305863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.305889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.306077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.306101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.306234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.306257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.306501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.306552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.306730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.306752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.306864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.306913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.307109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.307302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.307472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 
00:32:46.806 [2024-10-07 09:53:41.307622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.307795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.307975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.307998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.308925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.308950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 00:32:46.806 [2024-10-07 09:53:41.309079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.806 [2024-10-07 09:53:41.309103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.806 qpair failed and we were unable to recover it. 
00:32:46.806 [2024-10-07 09:53:41.309222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.806 [2024-10-07 09:53:41.309264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:46.806 qpair failed and we were unable to recover it.
00:32:46.807 [2024-10-07 09:53:41.310139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.807 [2024-10-07 09:53:41.310196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.807 qpair failed and we were unable to recover it.
00:32:46.807 [2024-10-07 09:53:41.312233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.807 [2024-10-07 09:53:41.312269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:46.807 qpair failed and we were unable to recover it.
00:32:46.807-00:32:46.810 [... 2024-10-07 09:53:41.309383 through 09:53:41.335245: the same three-line failure continues for every further reconnect attempt, cycling over tqpair=0x120f630, 0x7f9e38000b90 and 0x7f9e30000b90, always with addr=10.0.0.2, port=4420, errno = 111, and "qpair failed and we were unable to recover it." ...]
00:32:46.810 [2024-10-07 09:53:41.335439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.335462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.335747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.335775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.335917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.335957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.336931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.336983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.337162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.337188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 
00:32:46.810 [2024-10-07 09:53:41.337287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.810 [2024-10-07 09:53:41.337314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.810 qpair failed and we were unable to recover it. 00:32:46.810 [2024-10-07 09:53:41.337474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.337517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.337634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.337659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.337793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.337820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.338020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.338046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.338202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.338249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.338453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.338513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.338771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.338822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.338992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.339128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 
00:32:46.811 [2024-10-07 09:53:41.339337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.339465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.339613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.339782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.339836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.339978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.340128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.340311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.340509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.340670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.340906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.340944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 
00:32:46.811 [2024-10-07 09:53:41.341168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.341195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.341323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.341350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.341558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.341602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.341712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.341751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.341946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.341973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.342099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.342125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.342292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.342316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.342420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.342444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.342590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.342629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.342755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.342779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 
00:32:46.811 [2024-10-07 09:53:41.343013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.343038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.343190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.343214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.343441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.343466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.343614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.343638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.343807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.343832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.343999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.344026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.344194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.344219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.344426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.344450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.811 [2024-10-07 09:53:41.344625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.811 [2024-10-07 09:53:41.344650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.811 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.344799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.344823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 
00:32:46.812 [2024-10-07 09:53:41.344945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.344981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.345171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.345198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.345411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.345436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.345545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.345585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.345704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.345729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.345869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.345917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.346025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.346222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.346406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.346569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 
00:32:46.812 [2024-10-07 09:53:41.346722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.346954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.346981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.347101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.347126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.347296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.347319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.347560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.347587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.347769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.347796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.347956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.347982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.348125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.348163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.348445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.348495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.348670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.348705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 
00:32:46.812 [2024-10-07 09:53:41.348904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.348955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.349925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.349950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.350079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.350199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.350336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 
00:32:46.812 [2024-10-07 09:53:41.350540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.350690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.350859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.350882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.351060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.351097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.351249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.351276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.351372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.351398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.351617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.351642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.812 qpair failed and we were unable to recover it. 00:32:46.812 [2024-10-07 09:53:41.351805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.812 [2024-10-07 09:53:41.351830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.351972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.352105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 
00:32:46.813 [2024-10-07 09:53:41.352246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.352411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.352623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.352794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.352971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.352997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.353138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.353190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.353373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.353396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.353552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.353580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.353745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.353789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.353944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.353970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 
00:32:46.813 [2024-10-07 09:53:41.354107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.354132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.354323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.354348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.354494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.354524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.354614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.354639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.354796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.354837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.354975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.355001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.355181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.355234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.355409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.355436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.355644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.355670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.355832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.355861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 
00:32:46.813 [2024-10-07 09:53:41.356045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.356075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.356182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.356206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.356397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.356422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.356610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.356688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.356987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.357013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.357241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.357264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.357498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.357564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.357824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.357849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.358004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.358030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.358158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.358199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 
00:32:46.813 [2024-10-07 09:53:41.358345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.358370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.358588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.358653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.358896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.358926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.359077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.359103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.359227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.813 [2024-10-07 09:53:41.359267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.813 qpair failed and we were unable to recover it. 00:32:46.813 [2024-10-07 09:53:41.359443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.359475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.359643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.359669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.359846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.359871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.360017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.360042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.360224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.360248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 
00:32:46.814 [2024-10-07 09:53:41.360372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.360419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.360669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.360735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.360999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.361025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.361160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.361201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.361360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.361390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.361656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.361680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.361901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.361926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.362116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.362142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.362325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.362350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.362536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.362600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 
00:32:46.814 [2024-10-07 09:53:41.362859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.362957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.363120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.363309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.363508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.363709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.363843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.363988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.364014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.364201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.364239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.364396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.364435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.364549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.364579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 
00:32:46.814 [2024-10-07 09:53:41.364747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.364771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.364987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.365014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.365181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.365206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.365339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.365363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.365517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.365593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.365928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.365972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.366105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.366130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.366266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.366304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.366511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.366578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.366845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.366870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 
00:32:46.814 [2024-10-07 09:53:41.367018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.367053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.367249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.367292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.367483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.814 [2024-10-07 09:53:41.367507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.814 qpair failed and we were unable to recover it. 00:32:46.814 [2024-10-07 09:53:41.367653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.367720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.367950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.367979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.368137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.368163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.368322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.368354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.368534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.368605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.368838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.368863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.369015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.369042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 
00:32:46.815 [2024-10-07 09:53:41.369239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.369282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.369507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.369533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.369757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.369824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.370043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.370069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.370255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.370278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.370468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.370492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.370762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.370835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.371126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.371153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.371325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.371350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.371505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.371530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 
00:32:46.815 [2024-10-07 09:53:41.371635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.371659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.371858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.371928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.372084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.372109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.372262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.372287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.372494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.372518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.372744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.372809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.373112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.373139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.373273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.373297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.373445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.373514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.373736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.373765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 
00:32:46.815 [2024-10-07 09:53:41.373905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.373931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.815 [2024-10-07 09:53:41.374910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.815 [2024-10-07 09:53:41.374936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.815 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.375094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.375120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.375285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.375309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.375477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.375502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 
00:32:46.816 [2024-10-07 09:53:41.375772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.375841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.376106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.376133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.376259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.376284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.376515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.376544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.376700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.376724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.376833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.376872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.377068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.377261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.377439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.377599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 
00:32:46.816 [2024-10-07 09:53:41.377774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.377909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.377950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.378886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.378919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.379105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.379130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.379305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.379329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 
00:32:46.816 [2024-10-07 09:53:41.379549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.379583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.379729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.379758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.379996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.380021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.380150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.380175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.380383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.380413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.380591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.380615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.380784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.380809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.381014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.381040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.381195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.381229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.381395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.381422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 
00:32:46.816 [2024-10-07 09:53:41.381599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.381642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.381836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.381921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.382146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.382172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.382361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.382387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.816 [2024-10-07 09:53:41.382511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.816 [2024-10-07 09:53:41.382534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.816 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.382728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.382753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.382975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.383005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.383155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.383179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.383361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.383385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.383572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.383602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 
00:32:46.817 [2024-10-07 09:53:41.383828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.383915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.384111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.384137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.384281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.384321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.384555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.384579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.384766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.384790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.384924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.384964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.385144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.385169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.385311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.385335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.385465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.385493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.385728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.385752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 
00:32:46.817 [2024-10-07 09:53:41.385947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.385973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.386189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.386232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.386452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.386475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.386667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.386690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.386884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.386921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.387112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.387137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.387261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.387287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.387470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.387509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.387672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.387696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.387828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.387854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 
00:32:46.817 [2024-10-07 09:53:41.388022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.388217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.388391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.388567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.388736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.388922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.388948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.389115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.389142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.389297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.389346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.389573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.389596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.389740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.389774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 
00:32:46.817 [2024-10-07 09:53:41.389966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.389991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.390119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.390145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.817 [2024-10-07 09:53:41.390244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.817 [2024-10-07 09:53:41.390270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.817 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.390423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.390447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.390599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.390623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.390726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.390751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.390931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.390956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.391069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.391095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.391226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.391250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.391344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.391371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 
00:32:46.818 [2024-10-07 09:53:41.391518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.391542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.391735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.391815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.392053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.392079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.392246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.392286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.392453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.392491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.392663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.392702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.392927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.392956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.393077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.393238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.393444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 
00:32:46.818 [2024-10-07 09:53:41.393604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.393789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.393972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.393997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.394130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.394154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.394325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.394351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.394507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.394531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.394727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.394756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.394920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.394973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.395110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.395138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 00:32:46.818 [2024-10-07 09:53:41.395249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.818 [2024-10-07 09:53:41.395274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.818 qpair failed and we were unable to recover it. 
00:32:46.818 [2024-10-07 09:53:41.395425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.818 [2024-10-07 09:53:41.395464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:46.818 qpair failed and we were unable to recover it.
[The same three-line sequence (connect() failed with errno = 111, followed by the sock connection error for tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every retry attempt from 09:53:41.395575 through 09:53:41.437498.]
00:32:46.825 [2024-10-07 09:53:41.437632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.437671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.437882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.437927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.438115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.438144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.438292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.438316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.438531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.438555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.438748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.438777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.438959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.438993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.439145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.439196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.439391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.439420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.439578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.439602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 
00:32:46.825 [2024-10-07 09:53:41.439749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.439774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.439964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.439990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.440116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.440316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.440519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.440696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.440836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.440985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.441167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.441374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 
00:32:46.825 [2024-10-07 09:53:41.441518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.441696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.441971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.441997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.442176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.442214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.442381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.442407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.442564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.442588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.442740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.442787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.442939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.442966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.443113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.443137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.443294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.443323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 
00:32:46.825 [2024-10-07 09:53:41.443472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.443510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.443679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.443704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.443851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.443953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.444132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.444158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.444303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.444328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.444491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.444520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.825 qpair failed and we were unable to recover it. 00:32:46.825 [2024-10-07 09:53:41.444674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.825 [2024-10-07 09:53:41.444699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.444931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.444957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.445122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.445171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.445303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.445342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 
00:32:46.826 [2024-10-07 09:53:41.445535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.445560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.445755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.445784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.446001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.446027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.446207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.446231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.446401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.446430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.446607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.446647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.446813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.446838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.447043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.447068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.447204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.447245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.447387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.447413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 
00:32:46.826 [2024-10-07 09:53:41.447584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.447629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.447847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.447872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.448056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.448081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.448285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.448315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.448523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.448547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.448739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.448805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.449084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.449109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.449231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.449256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.449503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.449527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.449724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.449800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 
00:32:46.826 [2024-10-07 09:53:41.450061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.450088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.450246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.450271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.450418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.450470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.450713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.450737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.450914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.450941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.451165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.451194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.451330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.451360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.451508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.451532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.451751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.451821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.826 qpair failed and we were unable to recover it. 00:32:46.826 [2024-10-07 09:53:41.452072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.826 [2024-10-07 09:53:41.452098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 
00:32:46.827 [2024-10-07 09:53:41.452281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.452306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.452455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.452487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.452604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.452629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.452801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.452826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.453948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.453975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 
00:32:46.827 [2024-10-07 09:53:41.454150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.454192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.454407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.454431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.454674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.454699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.454834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.454863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.455031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.455056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.455216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.455241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.455387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.455429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.455550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.455578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.455807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.455832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.456014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.456041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 
00:32:46.827 [2024-10-07 09:53:41.456219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.456258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.456413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.456452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.456613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.456638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.456835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.456858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.457069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.457094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.457228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.457259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.457452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.457478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.457621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.457645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.457853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.457900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.458035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.458059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 
00:32:46.827 [2024-10-07 09:53:41.458230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.458269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.458433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.458462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.458651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.458675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.458958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.458985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.459152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.459177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.459366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.459391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.459560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.827 [2024-10-07 09:53:41.459588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.827 qpair failed and we were unable to recover it. 00:32:46.827 [2024-10-07 09:53:41.459713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.459757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.459903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.459928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.460085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.460110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 
00:32:46.828 [2024-10-07 09:53:41.460282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.460311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.460501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.460527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.460746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.460772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.461111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.461291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.461466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.461596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.461826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.461983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.462181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 
00:32:46.828 [2024-10-07 09:53:41.462392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.462532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.462738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.462929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.462955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.463128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.463153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.463349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.463392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.463569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.463593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.463757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.463828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.464066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.464092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.464291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.464315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 
00:32:46.828 [2024-10-07 09:53:41.464485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.464509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.464703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.464746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.464864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.464912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.465108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.465133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.465259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.465305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.465474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.465498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.465615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.465640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.465823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.465862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.466019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.466046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 00:32:46.828 [2024-10-07 09:53:41.466215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.828 [2024-10-07 09:53:41.466239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.828 qpair failed and we were unable to recover it. 
00:32:46.828 [2024-10-07 09:53:41.466378 - 09:53:41.503684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (this identical error sequence repeats continuously throughout the interval)
00:32:46.834 [2024-10-07 09:53:41.503853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.503877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.504014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.504054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.504236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.504277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.504457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.504480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.504622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.504646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.504833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.834 [2024-10-07 09:53:41.504862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.834 qpair failed and we were unable to recover it. 00:32:46.834 [2024-10-07 09:53:41.505084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.505110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.505229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.505276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.505490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.505518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.505692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.505715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 
00:32:46.835 [2024-10-07 09:53:41.505905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.505945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.506141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.506170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.506354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.506379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.506616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.506640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.506869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.506908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.507092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.507118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.507309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.507337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.507444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.507482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.507651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.507690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.507870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.507914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 
00:32:46.835 [2024-10-07 09:53:41.508159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.508188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.508498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.508522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.508734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.508757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.508961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.508987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.509141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.509182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.509348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.509372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.509575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.509604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.835 [2024-10-07 09:53:41.509792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.835 [2024-10-07 09:53:41.509816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.835 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.509933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.509958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.510118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.510161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 
00:32:46.836 [2024-10-07 09:53:41.510336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.510360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.510584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.510608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.510750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.510779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.510918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.510961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.511158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.511198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.511362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.511391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.511559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.511582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.511798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.511822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.511947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.511977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.512096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.512121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 
00:32:46.836 [2024-10-07 09:53:41.512335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.512358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.512540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.512569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.512735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.512758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.512977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.513003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.513203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.513233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.513449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.513472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.513626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.513650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.513791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.513853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.514059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.514084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.514246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.514285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 
00:32:46.836 [2024-10-07 09:53:41.514438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.514467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.514665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.514689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.514864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.514910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.515066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.836 [2024-10-07 09:53:41.515092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.836 qpair failed and we were unable to recover it. 00:32:46.836 [2024-10-07 09:53:41.515208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.515232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.515384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.515422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.515572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.515596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.515786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.515815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.516065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.516234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 
00:32:46.837 [2024-10-07 09:53:41.516422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.516555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.516724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.516899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.516925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.517031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.517057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.517200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.517224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.517364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.517402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.517544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.517582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.517765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.517830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.518103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.518129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 
00:32:46.837 [2024-10-07 09:53:41.518325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.518349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.518514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.518579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.518832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.518857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.519043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.519068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.519233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.519274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.519487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.519511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.519682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.519705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.519912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.519961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.520158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.520197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.520340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.520364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 
00:32:46.837 [2024-10-07 09:53:41.520599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.520628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.520865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.837 [2024-10-07 09:53:41.520911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.837 qpair failed and we were unable to recover it. 00:32:46.837 [2024-10-07 09:53:41.521069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.521225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.521391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.521545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.521707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.521875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.521928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.522114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.522137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.522291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.522320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 
00:32:46.838 [2024-10-07 09:53:41.522452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.522490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.522668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.522692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.522945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.522970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.523208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.523232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.523427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.523456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.523642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.523679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.523866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.523910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.524094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.524119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.524239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.524280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.524411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.524449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 
00:32:46.838 [2024-10-07 09:53:41.524688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.524712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.524899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.524941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.525085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.525109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.525295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.525318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.525467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.525496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.525637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.525674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.525850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.525873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.526040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.526069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.526247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.526270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.526507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.526531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 
00:32:46.838 [2024-10-07 09:53:41.526798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.526827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.526998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.527023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.838 qpair failed and we were unable to recover it. 00:32:46.838 [2024-10-07 09:53:41.527223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.838 [2024-10-07 09:53:41.527247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.527430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.527459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.527623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.527661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.527879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.527922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.528145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.528175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.528368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.528391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.528541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.528564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.528713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.528755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 
00:32:46.839 [2024-10-07 09:53:41.528909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.528935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.529050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.529075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.529217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.529256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.529397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.529426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.529570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.529595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.529844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.529873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.530025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.530157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.530319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.530512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 
00:32:46.839 [2024-10-07 09:53:41.530687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.530835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.530860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.531083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.531108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.531304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.531327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.531497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.531526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.531699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.531723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.531901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.531940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.532106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.532131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.532289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.532312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.532458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.532481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 
00:32:46.839 [2024-10-07 09:53:41.532664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.532692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.532823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.532862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.533015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.839 [2024-10-07 09:53:41.533054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.839 qpair failed and we were unable to recover it. 00:32:46.839 [2024-10-07 09:53:41.533279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.533308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.533497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.533521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.533704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.533727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.533874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.533910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.534068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.534093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.534227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.534251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 00:32:46.840 [2024-10-07 09:53:41.534443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.840 [2024-10-07 09:53:41.534472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.840 qpair failed and we were unable to recover it. 
00:32:46.845 [2024-10-07 09:53:41.576454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.845 [2024-10-07 09:53:41.576478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.845 qpair failed and we were unable to recover it. 00:32:46.845 [2024-10-07 09:53:41.576654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.845 [2024-10-07 09:53:41.576679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.845 qpair failed and we were unable to recover it. 00:32:46.845 [2024-10-07 09:53:41.576936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.845 [2024-10-07 09:53:41.576961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.845 qpair failed and we were unable to recover it. 00:32:46.845 [2024-10-07 09:53:41.577164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.845 [2024-10-07 09:53:41.577192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.577335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.577359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.577463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.577487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.577643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.577667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.577853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.577876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.578069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.578226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 
00:32:46.846 [2024-10-07 09:53:41.578347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.578496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.578741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.578926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.578953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.579125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.579149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.579338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.579367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.579504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.579529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.579715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.579780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.580104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.580130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.580279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.580302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 
00:32:46.846 [2024-10-07 09:53:41.580525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.580548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.580727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.580755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.580975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.581000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.581209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.581233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.581436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.581465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.581660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.581684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.581861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.581884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.582149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.582178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.582431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.582456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.582637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.582662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 
00:32:46.846 [2024-10-07 09:53:41.582919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.582945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.583128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.583154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.846 qpair failed and we were unable to recover it. 00:32:46.846 [2024-10-07 09:53:41.583323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.846 [2024-10-07 09:53:41.583361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.583566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.583594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.583744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.583767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.584011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.584036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.584157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.584199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.584403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.584431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.584693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.584732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.584929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.584958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 
00:32:46.847 [2024-10-07 09:53:41.585206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.585245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.585468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.585491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.585693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.585721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.585944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.585969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.586168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.586207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.586397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.586422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:46.847 [2024-10-07 09:53:41.586552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.847 [2024-10-07 09:53:41.586592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:46.847 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.586775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.586804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.587002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.587225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 
00:32:47.131 [2024-10-07 09:53:41.587379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.587522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.587708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.587898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.587929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.588073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.588099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.588266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.588291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.588491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.588517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.588697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.588722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.588903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.588947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.589145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.589171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 
00:32:47.131 [2024-10-07 09:53:41.589385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.589411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.589543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.589568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.589709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.589734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.589865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.589906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.590050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.590075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.590219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.590259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.590438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.590463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.590638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.590663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.590864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.590888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.591138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.591167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 
00:32:47.131 [2024-10-07 09:53:41.591417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.591440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.591578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.591602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.591796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.591821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.591951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.591976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.592134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.592159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.592305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.592345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.592505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.592530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.592776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.592841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.593142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.593168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 00:32:47.131 [2024-10-07 09:53:41.593275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-10-07 09:53:41.593299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.131 qpair failed and we were unable to recover it. 
00:32:47.131 [2024-10-07 09:53:41.593489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.593514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.593663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.593691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.593828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.593867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.594116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.594142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.594302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.594331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.594542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.594566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.594763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.594787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.595004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.595162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.595336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 
00:32:47.132 [2024-10-07 09:53:41.595476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.595691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.595970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.595996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.596200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.596234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.596399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.596423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.596673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.596697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.596907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.596935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.597082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.597108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.597344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.597368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.597510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.597539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 
00:32:47.132 [2024-10-07 09:53:41.597644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.597667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.597821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.597845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.598084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.598109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.598279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.598303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.598478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.598501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.598699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.598727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.598945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.598971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.599106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.599131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.599342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.599371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.599536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.599559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 
00:32:47.132 [2024-10-07 09:53:41.599749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.599775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.599943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.599985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.600159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.600199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.600408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.600432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.600578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.600606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.600834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.600857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.601031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.601055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.601187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.601212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.132 qpair failed and we were unable to recover it. 00:32:47.132 [2024-10-07 09:53:41.601409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-10-07 09:53:41.601433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.601675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.601699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 
00:32:47.133 [2024-10-07 09:53:41.601939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.601969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.602212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.602236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.602473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.602497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.602678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.602707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.602836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.602874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.603013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.603251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.603410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.603605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.603793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 
00:32:47.133 [2024-10-07 09:53:41.603958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.603983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.604171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.604209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.604440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.604468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.604648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.604675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.604830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.604907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.605146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.605171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.605350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.605373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.605564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.605591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.605841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.605870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 00:32:47.133 [2024-10-07 09:53:41.606045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.133 [2024-10-07 09:53:41.606071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.133 qpair failed and we were unable to recover it. 
00:32:47.138 [2024-10-07 09:53:41.644792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.644857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.645084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.645109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.645260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.645283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.645497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.645526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.645740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.645764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.645909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.645947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.646128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.646156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.646327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.646351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.646481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.646519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.138 [2024-10-07 09:53:41.646660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.646707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 
00:32:47.138 [2024-10-07 09:53:41.646978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.138 [2024-10-07 09:53:41.647003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.138 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.647181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.647207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.647354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.647383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.647535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.647558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.647740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.647763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.647918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.647961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.648136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.648162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.648312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.648351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.648591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.648619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.648785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.648809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 
00:32:47.139 [2024-10-07 09:53:41.649025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.649049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.649229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.649258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.649454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.649479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.649658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.649681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.649927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.649957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.650184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.650223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.650421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.650444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.650616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.650645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.650874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.650949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.651147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.651171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 
00:32:47.139 [2024-10-07 09:53:41.651292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.651320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.651565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.651603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.651801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.651824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.651961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.652001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.652182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.652206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.652427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.652451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.652665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.652693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.652854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.652877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.653050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.653083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.653248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.653290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 
00:32:47.139 [2024-10-07 09:53:41.653487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.653510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.653616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.653654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.653905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.653935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.654052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.654076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.654217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.654242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.654455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.654484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.654616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.654639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.654777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.654801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.655039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.139 [2024-10-07 09:53:41.655063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.139 qpair failed and we were unable to recover it. 00:32:47.139 [2024-10-07 09:53:41.655225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.655252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 
00:32:47.140 [2024-10-07 09:53:41.655480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.655504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.655679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.655708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.655871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.655900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.656103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.656127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.656252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.656276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.656531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.656554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.656749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.656813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.657115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.657141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.657317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.657341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.657483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.657507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 
00:32:47.140 [2024-10-07 09:53:41.657725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.657753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.657952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.657988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.658146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.658170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.658283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.658326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.658551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.658575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.658812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.658835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.659022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.659055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.659243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.659266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.659416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.659455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.659605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.659648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 
00:32:47.140 [2024-10-07 09:53:41.659779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.659810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.660335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.660387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.660564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.660593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.660724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.660751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.660870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.660901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.661037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.661063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.661234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.661258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.661372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.661396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.661515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.661553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.661708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.661734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 
00:32:47.140 [2024-10-07 09:53:41.661976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.662001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.662087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.140 [2024-10-07 09:53:41.662111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.140 qpair failed and we were unable to recover it. 00:32:47.140 [2024-10-07 09:53:41.662284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.662309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.662562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.662588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.662806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.662835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.662973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.662999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.663169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.663210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.663357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.663386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.663624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.663650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.663779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.663842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 
00:32:47.141 [2024-10-07 09:53:41.664099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.664125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.664374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.664398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.664575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.664600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.664743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.664769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.664912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.664939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.665031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.665057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.665199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.665224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.665474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.665499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.665672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.665697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.665861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.665888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 
00:32:47.141 [2024-10-07 09:53:41.666037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.666195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.666374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.666532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.666732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.666900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.666941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.667089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.667115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.667510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.667583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.667906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.667963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.668107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.668134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 
00:32:47.141 [2024-10-07 09:53:41.668278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.668306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.668460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.668498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.668676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.668715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.668867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.668898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.669049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.669091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.669291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.669314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.669552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.669576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.669817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.669846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.670000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.670026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 00:32:47.141 [2024-10-07 09:53:41.670157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.141 [2024-10-07 09:53:41.670183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.141 qpair failed and we were unable to recover it. 
00:32:47.141 [2024-10-07 09:53:41.670396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.670425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.670608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.670640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.670838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.670941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.671100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.671126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.671249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.671287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.671463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.671487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.671627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.671668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.671865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.671913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.672081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.672107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.672345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.672374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 
00:32:47.142 [2024-10-07 09:53:41.672556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.672580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.672774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.672841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.673096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.673122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.673314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.673338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.673546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.673569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.673769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.673833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.674077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.674104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.674284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.674323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.674468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.674496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 00:32:47.142 [2024-10-07 09:53:41.674651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.142 [2024-10-07 09:53:41.674675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.142 qpair failed and we were unable to recover it. 
00:32:47.142 [2024-10-07 09:53:41.674857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.142 [2024-10-07 09:53:41.674880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.142 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every remaining connection attempt in this burst (09:53:41.675 through 09:53:41.717, elapsed 00:32:47.142-00:32:47.148), with only the timestamps changing: posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:32:47.148 [2024-10-07 09:53:41.717781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.148 [2024-10-07 09:53:41.717805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.148 qpair failed and we were unable to recover it.
00:32:47.148 [2024-10-07 09:53:41.717983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.718140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.718329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.718501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.718728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.718965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.718992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.719155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.719197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.719355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.719383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.719615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.719639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.719817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.719841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 
00:32:47.148 [2024-10-07 09:53:41.720017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.720218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.720350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.720531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.720707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.720953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.720994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.721140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.721169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.721298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.721336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.721523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.721547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.721708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.721737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 
00:32:47.148 [2024-10-07 09:53:41.721923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.721950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.722063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.722089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.722226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.722265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.722456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.722480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.722613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.722637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.722806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.722845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.723049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.723074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.723252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.723292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.723402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.723444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.723565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.723592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 
00:32:47.148 [2024-10-07 09:53:41.723847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.148 [2024-10-07 09:53:41.723887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.148 qpair failed and we were unable to recover it. 00:32:47.148 [2024-10-07 09:53:41.724033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.724059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.724208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.724248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.724424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.724447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.724674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.724704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.724949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.724976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.725135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.725161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.725345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.725374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.725534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.725558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.725697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.725721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 
00:32:47.149 [2024-10-07 09:53:41.725884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.725916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.726095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.726121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.726258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.726283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.726459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.726484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.726639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.726705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.726916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.726942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.727103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.727128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.727298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.727328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.727486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.727510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.727681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.727706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 
00:32:47.149 [2024-10-07 09:53:41.727887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.727935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.728121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.728146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.728334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.728374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.728555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.728585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.728716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.728755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.728926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.728952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.729093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.729117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.729286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.729310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.729490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.729514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.729673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.729703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 
00:32:47.149 [2024-10-07 09:53:41.729898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.729938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.730085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.730111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.730216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.730241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.730388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.730426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.149 [2024-10-07 09:53:41.730551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.149 [2024-10-07 09:53:41.730576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.149 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.730755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.730813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.731000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.731026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.731200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.731229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.731370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.731395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.731549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.731580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 
00:32:47.150 [2024-10-07 09:53:41.731693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.731735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.731983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.732156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.732316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.732468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.732669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.732850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.732878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.733047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.733168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.733305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 
00:32:47.150 [2024-10-07 09:53:41.733458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.733643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.733850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.733888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.734887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.734933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.735053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 
00:32:47.150 [2024-10-07 09:53:41.735199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.735359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.735539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.735685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.735855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.735899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.736042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.736068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.736150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.736175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.736353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.736383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.736562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.736586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.736764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.736841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 
00:32:47.150 [2024-10-07 09:53:41.737102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.737128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.737288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.737317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.150 [2024-10-07 09:53:41.737468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.150 [2024-10-07 09:53:41.737492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.150 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.737606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.737646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.737784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.737810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.737999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.738040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.738192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.738216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.738386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.738432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.738559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.738583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.738755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.738822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 
00:32:47.151 [2024-10-07 09:53:41.739076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.739103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.739201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.739226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.739436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.739465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.739598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.739627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.739777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.739806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.740065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.740091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.740228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.740269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.740464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.740494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.740647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.740672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.740809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.740833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 
00:32:47.151 [2024-10-07 09:53:41.740985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.741155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.741306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.741551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.741736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.741942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.741974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.742124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.742149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.742313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.742337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.742519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.742548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.742712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.742742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 
00:32:47.151 [2024-10-07 09:53:41.742900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.742933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.743916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.743957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.744189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.744219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.744391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.744417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.151 [2024-10-07 09:53:41.744597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.744621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 
00:32:47.151 [2024-10-07 09:53:41.744795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.151 [2024-10-07 09:53:41.744825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.151 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.744975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.745007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.745122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.745146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.745305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.745330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.745473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.745513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.745574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121d5f0 (9): Bad file descriptor 00:32:47.152 [2024-10-07 09:53:41.745820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.745857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.746026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.746053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.746208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.746234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.746396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.746464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 
00:32:47.152 [2024-10-07 09:53:41.746581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.746606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.746759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.746784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.746976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.747188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.747360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.747531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.747764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.747949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.747975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.748065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.748246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 
00:32:47.152 [2024-10-07 09:53:41.748401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.748545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.748727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.748901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.748942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.749071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.749095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.749293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.749316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.749516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.749545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.749746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.749810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.750096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.750121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.750231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.750274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 
00:32:47.152 [2024-10-07 09:53:41.750450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.750504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.750765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.750831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.751068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.751094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.751312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.751336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.751486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.751514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.751647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.751689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.751863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.751896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.752110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.752135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.752278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.752318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 00:32:47.152 [2024-10-07 09:53:41.752527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.152 [2024-10-07 09:53:41.752594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.152 qpair failed and we were unable to recover it. 
00:32:47.153 [2024-10-07 09:53:41.752825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.752849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.753061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.753087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.753259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.753282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.753427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.753450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.753643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.753672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.753852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.753950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.754116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.754142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.754293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.754333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.754524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.754548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.754693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.754729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 
00:32:47.153 [2024-10-07 09:53:41.754966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.754992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.755158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.755196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.755426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.755450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.755710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.755739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.755902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.755956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.756188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.756214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.756392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.756416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.756618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.756677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.756958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.756984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.757155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.757195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 
00:32:47.153 [2024-10-07 09:53:41.757422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.757445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.757667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.757692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.757872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.757911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.758051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.758076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.758273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.758298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.758455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.758495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.758628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.758671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.758970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.758997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.759139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.759164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.759330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.759364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 
00:32:47.153 [2024-10-07 09:53:41.759586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.759611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.759804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.759834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.759972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.759999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.153 qpair failed and we were unable to recover it. 00:32:47.153 [2024-10-07 09:53:41.760129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.153 [2024-10-07 09:53:41.760154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.760293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.760340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.760564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.760589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.760786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.760811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.761013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.761043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.761275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.761299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.761492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.761516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 
00:32:47.154 [2024-10-07 09:53:41.761719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.761748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.761944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.761970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.762196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.762221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.762350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.762380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.762552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.762577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.762717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.762756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.762912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.762937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.763060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.763085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.763285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.763309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.763456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.763501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 
00:32:47.154 [2024-10-07 09:53:41.763617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.763666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.763870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.763914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.764099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.764124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.764307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.764331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.764436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.764475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.764648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.764693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.764877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.764962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.765177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.765203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.765330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.765358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.765466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.765491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 
00:32:47.154 [2024-10-07 09:53:41.765693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.765717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.765945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.765971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.766168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.766207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.766309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.766333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.766485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.766510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.766629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.766673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.766845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.766869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.767030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.767065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.767258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.767282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.767392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.767416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 
00:32:47.154 [2024-10-07 09:53:41.767594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.767634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.154 [2024-10-07 09:53:41.767763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.154 [2024-10-07 09:53:41.767804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.154 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.767939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.767979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.768104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.768144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.768405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.768432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.768578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.768617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.768797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.768826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.769033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.769059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.769251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.769276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.769428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.769469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 
00:32:47.155 [2024-10-07 09:53:41.769703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.769727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.769870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.769918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.770012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.770037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.770245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.770268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.770508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.770533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.770694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.770724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.770918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.770958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.771191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.771216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.771387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.771411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.771565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.771600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 
00:32:47.155 [2024-10-07 09:53:41.771806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.771871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.772140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.772167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.772416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.772440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.772565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.772590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.772826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.772868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.773078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.773104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.773274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.773313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.773505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.773534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.773722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.773788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.774052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.774090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 
00:32:47.155 [2024-10-07 09:53:41.774257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.774286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.774465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.774493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.774668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.774692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.774874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.774955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.775169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.775194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.775336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.775377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.775560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.775589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.775822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.775846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.776069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.776094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 00:32:47.155 [2024-10-07 09:53:41.776262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.155 [2024-10-07 09:53:41.776290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.155 qpair failed and we were unable to recover it. 
00:32:47.156 [2024-10-07 09:53:41.776463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.776487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.776694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.776717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.776857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.776886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.777027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.777055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.777256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.777279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.777475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.777504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.777666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.777706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.777907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.777932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.778114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.778144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 00:32:47.156 [2024-10-07 09:53:41.778288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.156 [2024-10-07 09:53:41.778313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.156 qpair failed and we were unable to recover it. 
00:32:47.156 [2024-10-07 09:53:41.778428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.156 [2024-10-07 09:53:41.778469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.156 qpair failed and we were unable to recover it.
00:32:47.156 [the same three-line error pattern repeats continuously from 09:53:41.778428 through 09:53:41.820589, with only the timestamps changing: connect() to addr=10.0.0.2, port=4420 is refused with errno = 111, the qpair at tqpair=0x7f9e38000b90 fails, and it cannot be recovered]
00:32:47.161 [2024-10-07 09:53:41.820565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.161 [2024-10-07 09:53:41.820589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.161 qpair failed and we were unable to recover it.
00:32:47.161 [2024-10-07 09:53:41.820831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.161 [2024-10-07 09:53:41.820855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.161 qpair failed and we were unable to recover it. 00:32:47.161 [2024-10-07 09:53:41.821010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.161 [2024-10-07 09:53:41.821035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.161 qpair failed and we were unable to recover it. 00:32:47.161 [2024-10-07 09:53:41.821142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.161 [2024-10-07 09:53:41.821169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.161 qpair failed and we were unable to recover it. 00:32:47.161 [2024-10-07 09:53:41.821314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.161 [2024-10-07 09:53:41.821339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.161 qpair failed and we were unable to recover it. 00:32:47.161 [2024-10-07 09:53:41.821492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.161 [2024-10-07 09:53:41.821516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.161 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.821669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.821707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.821853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.821899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.822028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.822193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.822419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 
00:32:47.162 [2024-10-07 09:53:41.822633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.822768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.822973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.822999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.823972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.823998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.824136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.824166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 
00:32:47.162 [2024-10-07 09:53:41.824355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.824379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.824533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.824556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.824797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.824826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.824986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.825012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.825193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.825218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.825362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.825392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.825531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.825570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.825766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.825792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.825991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.826220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 
00:32:47.162 [2024-10-07 09:53:41.826430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.826633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.826798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.826970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.826996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.827123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.827148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.827379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.827403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.827548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.827573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.827724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.827766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.827901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.827926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 00:32:47.162 [2024-10-07 09:53:41.828071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.162 [2024-10-07 09:53:41.828098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.162 qpair failed and we were unable to recover it. 
00:32:47.162 [2024-10-07 09:53:41.828242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.828283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.828475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.828499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.828670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.828709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.828917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.828965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.829132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.829156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.829314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.829338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.829467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.829511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.829654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.829693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.829837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.829862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.830013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.830041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 
00:32:47.163 [2024-10-07 09:53:41.830183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.830208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.830356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.830387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.830624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.830654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.830812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.830836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.831914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.831963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 
00:32:47.163 [2024-10-07 09:53:41.832119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.832146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.832366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.832395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.832531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.832570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.832692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.832717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.832834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.832859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.833068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.833093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.833285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.833310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.833495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.833525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.833725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.833749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.833932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.833972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 
00:32:47.163 [2024-10-07 09:53:41.834130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.834159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.834266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.834305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.834515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.834538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.834716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.834792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.835041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.835067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.835246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.835271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.835444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.835473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.835642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.835665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.835863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.835888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.836103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.836138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 
00:32:47.163 [2024-10-07 09:53:41.836319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.836345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.163 qpair failed and we were unable to recover it. 00:32:47.163 [2024-10-07 09:53:41.836514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.163 [2024-10-07 09:53:41.836538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.836687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.836766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.836984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.837166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.837336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.837484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.837655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.837835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.837926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.838094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.838120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 
00:32:47.164 [2024-10-07 09:53:41.838250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.838274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.838447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.838492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.838626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.838652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.838807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.838832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.838982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.839167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.839327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.839531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.839668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.839829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.839855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 
00:32:47.164 [2024-10-07 09:53:41.840010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.840183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.840387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.840517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.840763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.840966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.840994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.841145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.841173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.841345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.841369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.841570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.841594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.841759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.841788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 
00:32:47.164 [2024-10-07 09:53:41.841940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.841966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.842112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.842139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.842265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.842290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.842515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.842541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.842653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.842678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.842791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.842833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.843001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.843027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.843172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.843197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.843418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.843448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.843609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.843642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 
00:32:47.164 [2024-10-07 09:53:41.843801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.843824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.843977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.844022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.844260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.164 [2024-10-07 09:53:41.844284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.164 qpair failed and we were unable to recover it. 00:32:47.164 [2024-10-07 09:53:41.844476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.844501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.844700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.844729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.844875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.844965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.845116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.845142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.845285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.845310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.845549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.845573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 00:32:47.165 [2024-10-07 09:53:41.845734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.845761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it. 
00:32:47.165 [2024-10-07 09:53:41.845949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.845975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it.
00:32:47.165 [2024-10-07 09:53:41.851440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.851477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it.
00:32:47.165 [2024-10-07 09:53:41.852587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.165 [2024-10-07 09:53:41.852656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.165 qpair failed and we were unable to recover it.
00:32:47.170 [2024-10-07 09:53:41.888036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.888076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it.
00:32:47.170 [2024-10-07 09:53:41.888245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.888276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.888479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.888503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.888640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.888665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.888838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.888884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.889108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.889132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.889243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.889267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.889473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.889502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.889661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.889686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.889854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.889901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.890064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 
00:32:47.170 [2024-10-07 09:53:41.890178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.890347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.890533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.890696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.890919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.890959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.891092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.891118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.891290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.891313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.891483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.891506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.891723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.891759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.891904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.891944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 
00:32:47.170 [2024-10-07 09:53:41.892120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.892148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.892262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.892303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.892464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.892488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.892626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.892664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.892809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.892850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.893003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.893180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.893415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.893547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.893741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 
00:32:47.170 [2024-10-07 09:53:41.893914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.893956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.894103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.894132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.894277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.894303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.894451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.894488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.894707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.894731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.170 [2024-10-07 09:53:41.894875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.170 [2024-10-07 09:53:41.894918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.170 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.895052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.895097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.895274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.895298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.895398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.895423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.895559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.895583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 
00:32:47.171 [2024-10-07 09:53:41.895720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.895755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.895999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.896025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.896280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.896308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.896569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.896593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.896736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.896760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.896986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.897016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.897226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.897259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.897437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.897461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.897604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.897641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.897787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.897826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 
00:32:47.171 [2024-10-07 09:53:41.898016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.898042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.898238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.898275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.898432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.898456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.898624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.898648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.898814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.898846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.899066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.899092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.899215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.899254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.899460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.899489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.899690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.899718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.899853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.899877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 
00:32:47.171 [2024-10-07 09:53:41.900065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.900090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.900246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.900271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.900439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.900489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.900627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.900663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.900814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.900853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.901007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.901031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.901187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.901228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.901342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.901385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.901572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.901596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.901756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.901781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 
00:32:47.171 [2024-10-07 09:53:41.902005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.902030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.902217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.902241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.902416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.902457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.902584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.902625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.902772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.902813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.902995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.903147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.903340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.903489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.903674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 
00:32:47.171 [2024-10-07 09:53:41.903960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.903985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.904127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.171 [2024-10-07 09:53:41.904161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.171 qpair failed and we were unable to recover it. 00:32:47.171 [2024-10-07 09:53:41.904332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.904356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.904471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.904496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.904639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.904665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.904851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.904876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.905014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.905053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.905183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.905207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.905404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.905442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.905689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.905713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 
00:32:47.172 [2024-10-07 09:53:41.905917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.905948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.906118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.906142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.906288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.906324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.906507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.906545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.906777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.906800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.906911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.906936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.907061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.907086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.907275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.907305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.907464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.907507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.907659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.907685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 
00:32:47.172 [2024-10-07 09:53:41.907847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.907950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.908123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.908150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.908329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.908358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.908515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.908540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.908713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.908738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.908920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.908961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.909096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.909121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.910230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.910301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.910586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.910617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.910778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.910803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 
00:32:47.172 [2024-10-07 09:53:41.910945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.910987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.911093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.911120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.911299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.911324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.911473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.911498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.911671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.911696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.911887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.911931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.912094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.912120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.912219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.912261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.912402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.912442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.913364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.913445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 
00:32:47.172 [2024-10-07 09:53:41.913714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.913747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.913915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.913967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.914148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.914330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.914518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.914651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.914800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.914975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.915002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.915196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.915222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 00:32:47.172 [2024-10-07 09:53:41.915386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.172 [2024-10-07 09:53:41.915412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.172 qpair failed and we were unable to recover it. 
00:32:47.173 [2024-10-07 09:53:41.915599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.915649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.915781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.915806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.916037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.916207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.916382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.916637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.916811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.916996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.917022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.917170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.917206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.917422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.917447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 
00:32:47.173 [2024-10-07 09:53:41.917620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.917646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.917815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.917856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.918878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.918911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.919015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.919201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 
00:32:47.173 [2024-10-07 09:53:41.919370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.919554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.919697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.919913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.919952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.920934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.920961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 
00:32:47.173 [2024-10-07 09:53:41.921094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.921120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.921273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.921299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.921409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.921435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.173 [2024-10-07 09:53:41.921540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.173 [2024-10-07 09:53:41.921566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.173 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.921705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.921731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.921881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.921914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.922058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.922085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.922250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.922275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.922420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.922464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.922646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.922672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 
00:32:47.459 [2024-10-07 09:53:41.922813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.922840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.922987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.923120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.923249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.923426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.923548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.459 qpair failed and we were unable to recover it. 00:32:47.459 [2024-10-07 09:53:41.923695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.459 [2024-10-07 09:53:41.923722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.923853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.923880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.924021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.924198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 
00:32:47.460 [2024-10-07 09:53:41.924386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.924566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.924718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.924933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.924961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.925970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.925997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 
00:32:47.460 [2024-10-07 09:53:41.926150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.926176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.926369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.926394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.926546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.926585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.926773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.926825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.927939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.927968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 
00:32:47.460 [2024-10-07 09:53:41.928098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.928294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.928456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.928595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.928760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.928959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.928985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.929090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.929116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.929321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.929375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.929498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.929560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.929745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.929771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 
00:32:47.460 [2024-10-07 09:53:41.929958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.929983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.930114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.930138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.930324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.930349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.930446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.930501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.460 qpair failed and we were unable to recover it. 00:32:47.460 [2024-10-07 09:53:41.930720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.460 [2024-10-07 09:53:41.930788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.931016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.931043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.931217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.931246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.931536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.931601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.931817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.931846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.932014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 
00:32:47.461 [2024-10-07 09:53:41.932133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.932324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.932490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.932672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.932884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.932917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.933013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.933191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.933347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.933533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.933746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 
00:32:47.461 [2024-10-07 09:53:41.933915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.933953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.934072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.934273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.934455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.934625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.934799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.934999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.935155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.935331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.935537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 
00:32:47.461 [2024-10-07 09:53:41.935706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.935849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.935886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.936937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.936963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.937090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.937115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.937303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.937357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 
00:32:47.461 [2024-10-07 09:53:41.937552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.937588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.937697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.937732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.461 [2024-10-07 09:53:41.937869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.461 [2024-10-07 09:53:41.937901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.461 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.938028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.938053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.938180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.938205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.938403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.938468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.938682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.938749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.938977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.939144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.939331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 
00:32:47.462 [2024-10-07 09:53:41.939505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.939673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.939912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.939961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.940065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.940092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.940225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.940276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.940449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.940516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.940722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.940762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.940906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.940943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.941047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.941073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.941199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.941223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 
00:32:47.462 [2024-10-07 09:53:41.941397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.941463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.941703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.941770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.942947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.942974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.943080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.943106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.943238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.943279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 
00:32:47.462 [2024-10-07 09:53:41.943419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.943444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.943633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.943658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.943795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.943820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.944046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.944248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.944454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.944640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.944849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.944989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.945017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.462 [2024-10-07 09:53:41.945178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.945204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 
00:32:47.462 [2024-10-07 09:53:41.945363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.462 [2024-10-07 09:53:41.945389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.462 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.945535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.945561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.945790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.945855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.946047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.946073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.946218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.946259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.946436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.946509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.946719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.946744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.946916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.946950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.947048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.947073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.947215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.947240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 
00:32:47.463 [2024-10-07 09:53:41.947362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.947405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.947585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.947631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.947859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.947911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.948026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.948052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.948159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.948199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.948324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.948364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.948526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.948591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.948811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.948876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.949038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.949064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.949204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.949230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 
00:32:47.463 [2024-10-07 09:53:41.949432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.949498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.949712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.949737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.949884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.949929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.950074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.950099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.950232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.950257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.950416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.950484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.950719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.950785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.951070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.951097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.951240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.951281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.951490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.951557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 
00:32:47.463 [2024-10-07 09:53:41.951781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.951806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.951940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.951982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.952099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.952124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.952269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.952309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.952443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.952485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.952663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.952763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.953064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.953102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.953287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.953326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.463 [2024-10-07 09:53:41.953568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.463 [2024-10-07 09:53:41.953647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.463 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.953887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.953922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 
00:32:47.464 [2024-10-07 09:53:41.954092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.954119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.954319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.954343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.954488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.954513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.954661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.954700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.954809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.954834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 
00:32:47.464 [2024-10-07 09:53:41.955734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.955928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.955970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.956910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.956935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 
00:32:47.464 [2024-10-07 09:53:41.957371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.957967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.957994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.958208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.958235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.958403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.958432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.958570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.958607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.958884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.958963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 00:32:47.464 [2024-10-07 09:53:41.959107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.464 [2024-10-07 09:53:41.959135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.464 qpair failed and we were unable to recover it. 
00:32:47.464 [2024-10-07 09:53:41.959286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.959311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.959422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.959448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.959621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.959647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.959821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.959885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.960103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.960129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.960271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.960302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.960453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.960528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.960725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.960791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.961017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.961045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.961147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.961174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 
00:32:47.465 [2024-10-07 09:53:41.961297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.961323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.961475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.961522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.961700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.961755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.962011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.962038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.962200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.962230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.962405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.962430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.962645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.962685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.962810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.962876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.963057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.963190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 
00:32:47.465 [2024-10-07 09:53:41.963367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.963537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.963747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.963880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.963915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.964870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.964902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 
00:32:47.465 [2024-10-07 09:53:41.965033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.965060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.965204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.965230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.965414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.965450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.965615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.965640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.965787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.965816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.965981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.966009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.966114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.966148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.966291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.465 [2024-10-07 09:53:41.966317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.465 qpair failed and we were unable to recover it. 00:32:47.465 [2024-10-07 09:53:41.966429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.966454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.966641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.966724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 
00:32:47.466 [2024-10-07 09:53:41.966996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.967154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.967334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.967604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.967741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.967955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.967989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.968116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.968143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.968280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.968306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.968478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.968545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.968764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.968789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 
00:32:47.466 [2024-10-07 09:53:41.968968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.968996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.969961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.969989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.970111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.970140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.970422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.970447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.970650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.970679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 
00:32:47.466 [2024-10-07 09:53:41.970911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.970939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.971957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.971985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.972120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.972310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.972447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 
00:32:47.466 [2024-10-07 09:53:41.972637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.972763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.972955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.972983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.973115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.973141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.973255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.973280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.973413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.466 [2024-10-07 09:53:41.973438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.466 qpair failed and we were unable to recover it. 00:32:47.466 [2024-10-07 09:53:41.973580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.973605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.973746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.973771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.973922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.973950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.974074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.974106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 
00:32:47.467 [2024-10-07 09:53:41.974265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.974290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.974458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.974482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.974589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.974615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.974802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.974843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.974995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.975172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.975315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.975497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.975652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.975865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.975898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 
00:32:47.467 [2024-10-07 09:53:41.976020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.976902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.976997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.977170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.977372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.977564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 
00:32:47.467 [2024-10-07 09:53:41.977695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.977903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.977930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.978855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.978905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.979067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.979093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 00:32:47.467 [2024-10-07 09:53:41.979221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.467 [2024-10-07 09:53:41.979261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.467 qpair failed and we were unable to recover it. 
00:32:47.467 [2024-10-07 09:53:41.979404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.467 [2024-10-07 09:53:41.979429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:47.467 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f9e2c000b90 through 09:53:41.983618 ...]
00:32:47.468 [2024-10-07 09:53:41.983847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.468 [2024-10-07 09:53:41.983908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.468 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f9e38000b90 (addr=10.0.0.2, port=4420) through 09:53:42.017965, elapsed time 00:32:47.468 to 00:32:47.473 ...]
00:32:47.473 [2024-10-07 09:53:42.018134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.018315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.018516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.018644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.018801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.018960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.018989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.019142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.019182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.019337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.019362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.019495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.019542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.473 qpair failed and we were unable to recover it. 00:32:47.473 [2024-10-07 09:53:42.019645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.473 [2024-10-07 09:53:42.019669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 
00:32:47.474 [2024-10-07 09:53:42.019847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.019888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.020935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.020961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.021097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.021256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.021424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 
00:32:47.474 [2024-10-07 09:53:42.021584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.021746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.021887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.021919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.022130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.022322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.022460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.022631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.022825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.022995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.023133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 
00:32:47.474 [2024-10-07 09:53:42.023303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.023494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.023709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.023886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.023918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.024833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.024872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 
00:32:47.474 [2024-10-07 09:53:42.025016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.025186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.025385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.025581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.025717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.025905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.025931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.026091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.026121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.474 [2024-10-07 09:53:42.026285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.474 [2024-10-07 09:53:42.026309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.474 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.026469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.026493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.026665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.026689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 
00:32:47.475 [2024-10-07 09:53:42.026801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.026841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.026948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.026974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.027944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.027985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.028088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.028246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 
00:32:47.475 [2024-10-07 09:53:42.028411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.028610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.028785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.028956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.028998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.029104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.029134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.029323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.029348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.029486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.029510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.029684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.029713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.029822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.029846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.030028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 
00:32:47.475 [2024-10-07 09:53:42.030206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.030408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.030610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.030762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.030955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.030980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.031123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.031258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.031391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.031583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.031754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 
00:32:47.475 [2024-10-07 09:53:42.031909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.031934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.032146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.032281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.032476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.032648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.475 [2024-10-07 09:53:42.032841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.475 qpair failed and we were unable to recover it. 00:32:47.475 [2024-10-07 09:53:42.032999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.033161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.033365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.033529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 
00:32:47.476 [2024-10-07 09:53:42.033710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.033868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.033913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.034946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.034972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.035086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.035248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 
00:32:47.476 [2024-10-07 09:53:42.035418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.035575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.035759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.035917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.035942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.036105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.036303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.036482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.036622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.036843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.036990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 
00:32:47.476 [2024-10-07 09:53:42.037166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.037373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.037535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.037713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.037912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.037938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.038061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.038247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.038422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.038590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.038771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 
00:32:47.476 [2024-10-07 09:53:42.038957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.038984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.039120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.039146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.039276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.039317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.039469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.476 [2024-10-07 09:53:42.039493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.476 qpair failed and we were unable to recover it. 00:32:47.476 [2024-10-07 09:53:42.039601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.039625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.039768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.039794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.039947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.039988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.040148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.040177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.040335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.040359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.040461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.040500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 
00:32:47.477 [2024-10-07 09:53:42.040631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.040656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.040777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.040805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.040960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.041126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.041314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.041477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.041684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.041883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.041914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.042058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.042085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 00:32:47.477 [2024-10-07 09:53:42.042236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.477 [2024-10-07 09:53:42.042261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.477 qpair failed and we were unable to recover it. 
00:32:47.477 [... identical connect() failed (errno = 111) / tqpair=0x7f9e38000b90 (addr=10.0.0.2, port=4420) connection errors repeat for every attempt logged between 09:53:42.042413 and 09:53:42.079271, each ending with "qpair failed and we were unable to recover it." ...]
00:32:47.482 [2024-10-07 09:53:42.079382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.079406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.079524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.079549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.079685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.079710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.079813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.079837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.079986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.080180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.080419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.080595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.080802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.080948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.080974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 
00:32:47.482 [2024-10-07 09:53:42.081069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.081095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.081226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.081252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.081381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.081420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.482 [2024-10-07 09:53:42.081636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.482 [2024-10-07 09:53:42.081661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.482 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.081795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.081820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.082020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.082045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.082239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.082269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.082488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.082512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.082656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.082679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.082906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.082936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 
00:32:47.483 [2024-10-07 09:53:42.083085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.083112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.083256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.083294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.083434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.083461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.083660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.083684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.083853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.083877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.084042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.084067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.084274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.084298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.084469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.084493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.084660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.084689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.084883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.084930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 
00:32:47.483 [2024-10-07 09:53:42.085091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.085116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.085254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.085283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.085434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.085458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.085642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.085665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.085819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.085848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.085993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.086214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.086415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.086616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.086808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 
00:32:47.483 [2024-10-07 09:53:42.086968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.086994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.087129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.087154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.087352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.087376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.087563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.087592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.087768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.087793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.087991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.088145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.088317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.088579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.088770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 
00:32:47.483 [2024-10-07 09:53:42.088959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.088985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.089154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.483 [2024-10-07 09:53:42.089180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.483 qpair failed and we were unable to recover it. 00:32:47.483 [2024-10-07 09:53:42.089347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.089377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.089520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.089556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.089783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.089807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.089973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.090155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.090373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.090550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.090704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 
00:32:47.484 [2024-10-07 09:53:42.090896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.090923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.091046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.091212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.091401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.091584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.091762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.091986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.092210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.092363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.092519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 
00:32:47.484 [2024-10-07 09:53:42.092735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.092969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.092996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.093128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.093154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.093370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.093413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.093576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.093602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.093744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.093783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.093912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.093939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.094047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.094074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.094245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.094285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.094429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.094458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 
00:32:47.484 [2024-10-07 09:53:42.094635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.094660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.094874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.094962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.095126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.095152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.095400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.095425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.095530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.095557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.095715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.095759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.095913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.095943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.096149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.096190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.096359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.096388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.484 qpair failed and we were unable to recover it. 00:32:47.484 [2024-10-07 09:53:42.096528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.484 [2024-10-07 09:53:42.096557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 
00:32:47.485 [2024-10-07 09:53:42.096688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.096714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.096855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.096904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.097096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.097262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.097472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.097648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.097830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.097980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.098010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.098133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.098159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.098361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.098385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 
00:32:47.485 [2024-10-07 09:53:42.098591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.098620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.098744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.098815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.099061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.099236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.099514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.099655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.099837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.099992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.100155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.100350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 
00:32:47.485 [2024-10-07 09:53:42.100540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.100759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.100895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.100925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.101114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.101141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.101339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.101370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.101566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.101605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.101753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.101820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.102055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.102082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.102249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.102293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.102382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.102422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 
00:32:47.485 [2024-10-07 09:53:42.102597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.102656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.102845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.102870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.485 [2024-10-07 09:53:42.103855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.485 [2024-10-07 09:53:42.103907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.485 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.104046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.104072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.104226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.104281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 
00:32:47.486 [2024-10-07 09:53:42.104460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.104485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.104704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.104729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.104869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.104920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.105018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.105043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.105199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.105222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.105375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.105446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.105631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.105654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.105898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.105943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.106108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.106133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.106251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.106290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 
00:32:47.486 [2024-10-07 09:53:42.106498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.106523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.106659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.106691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.106817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.106842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.107023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.107050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.107187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.107261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.107510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.107537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.107721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.107762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.107900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.107925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.108042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.108068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 00:32:47.486 [2024-10-07 09:53:42.108186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.486 [2024-10-07 09:53:42.108212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.486 qpair failed and we were unable to recover it. 
00:32:47.492 [2024-10-07 09:53:42.149827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.149934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.150058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.150085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.150220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.150244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.150450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.150474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.150598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.150623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.150800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.150876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.151058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.151085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.151226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.151251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.151442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.151506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.151784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.151849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 
00:32:47.492 [2024-10-07 09:53:42.152029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.152055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.152174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.152222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.152406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.152430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.152597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.152621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.152827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.152909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.153035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.153061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.153274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.153312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.153490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.153556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.153766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.153831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.154030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 
00:32:47.492 [2024-10-07 09:53:42.154179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.154367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.154558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.154699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.154854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.154901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.155011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.155037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.155121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.155149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.155306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.155345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.155453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.155477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.155697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.155772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 
00:32:47.492 [2024-10-07 09:53:42.155979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.156006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.156222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.156245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.156442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.156471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.156629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.156704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.156948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.156975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.157130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.157161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.492 [2024-10-07 09:53:42.157352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.492 [2024-10-07 09:53:42.157376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.492 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.157620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.157643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.157819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.157884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.158105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.158131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 
00:32:47.493 [2024-10-07 09:53:42.158336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.158360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.158551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.158614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.158921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.158983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.159090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.159116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.159279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.159321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.159505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.159570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.159813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.159837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.160066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.160092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.160256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.160281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.160506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.160529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 
00:32:47.493 [2024-10-07 09:53:42.160743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.160807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.161039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.161191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.161384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.161598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.161837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.161999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.162025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.162190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.162223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.162384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.162407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.162597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.162622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 
00:32:47.493 [2024-10-07 09:53:42.162767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.162831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.163034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.163060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.163202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.163226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.163366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.163409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.163619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.163642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.163823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.163887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.164050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.164075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.164203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.164228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.164418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.164447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.164626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.164690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 
00:32:47.493 [2024-10-07 09:53:42.164914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.164941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.165056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.165082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.165225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.165264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.165408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.165435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.165538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.493 [2024-10-07 09:53:42.165562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.493 qpair failed and we were unable to recover it. 00:32:47.493 [2024-10-07 09:53:42.165693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.165757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.165985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.166110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.166296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.166450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 
00:32:47.494 [2024-10-07 09:53:42.166631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.166849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.166930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.167080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.167105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.167238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.167280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.167472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.167496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.167639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.167673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.167840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.167868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.168000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.168027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.168160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.168200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.168291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.168315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 
00:32:47.494 [2024-10-07 09:53:42.168510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.168573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.168824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.168856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.168982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.169008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.169160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.169200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.169329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.169352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.169574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.169639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.169931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.169957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.170098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.170258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.170456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 
00:32:47.494 [2024-10-07 09:53:42.170655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.170789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.170963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.170988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.171108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.171133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.171233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.171258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.171412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.171451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.171559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.171584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.172661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.172736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.172972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.172997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.173164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.173189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 
00:32:47.494 [2024-10-07 09:53:42.173373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.173407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.173581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.173619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.173766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.173789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.494 qpair failed and we were unable to recover it. 00:32:47.494 [2024-10-07 09:53:42.174010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.494 [2024-10-07 09:53:42.174044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.174203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.174227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.174448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.174472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.174627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.174656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.174790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.174851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.175043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.175243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 
00:32:47.495 [2024-10-07 09:53:42.175442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.175638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.175781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.175969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.175994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.176937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.176962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 
00:32:47.495 [2024-10-07 09:53:42.177084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.177252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.177482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.177657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.177796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.177944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.177968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.178111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.178136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.178276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.178300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.178530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.178554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.178694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.178718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 
00:32:47.495 [2024-10-07 09:53:42.178918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.178959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.179881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.179934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.180070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.180095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 00:32:47.495 [2024-10-07 09:53:42.180239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.495 [2024-10-07 09:53:42.180265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.495 qpair failed and we were unable to recover it. 
00:32:47.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1666921 Killed "${NVMF_APP[@]}" "$@"
00:32:47.495 [2024-10-07 09:53:42.180420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.495 [2024-10-07 09:53:42.180463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.495 qpair failed and we were unable to recover it.
00:32:47.495 [2024-10-07 09:53:42.180637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.495 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:47.495 [2024-10-07 09:53:42.180662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.495 qpair failed and we were unable to recover it.
00:32:47.495 [2024-10-07 09:53:42.180847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:47.496 [2024-10-07 09:53:42.180925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:32:47.496 [2024-10-07 09:53:42.181178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.181221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:47.496 [2024-10-07 09:53:42.181430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.181456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 [2024-10-07 09:53:42.181633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.181658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 [2024-10-07 09:53:42.181815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.181844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
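Note on the repeated errors above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 at this point, because the bash job running "${NVMF_APP[@]}" (pid 1666921) has just been killed, as reported at target_disconnect.sh line 36, and the harness is only now restarting the target (disconnect_init / nvmfappstart in the trace above). The following is a minimal standalone sketch, not SPDK's posix.c; the loopback address is a placeholder chosen for illustration. It produces the same "connect() failed, errno = 111" pattern when no listener is bound to the port:

/* Hedged illustration: a plain POSIX connect() against a TCP port with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder address for illustration */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no process listening on the port, this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run with nothing listening on the chosen port, this prints the same errno that the posix.c socket layer reports in the log lines above.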
00:32:47.496 [2024-10-07 09:53:42.181989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.182130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.182337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.182520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.182699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.182861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.182907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.183088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.183114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.183237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.183277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.183531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.183560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.183741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.183806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 
00:32:47.496 [2024-10-07 09:53:42.184027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.184843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.184974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.185000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.185142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.185183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.185370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.185410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 00:32:47.496 [2024-10-07 09:53:42.185559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.496 [2024-10-07 09:53:42.185585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.496 qpair failed and we were unable to recover it. 
00:32:47.496 [2024-10-07 09:53:42.185771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.185800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1667544
00:32:47.496 [2024-10-07 09:53:42.185942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.185969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1667544
00:32:47.496 [2024-10-07 09:53:42.186071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.186097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1667544 ']'
00:32:47.496 [2024-10-07 09:53:42.186228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.496 [2024-10-07 09:53:42.186254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.496 qpair failed and we were unable to recover it.
00:32:47.496 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:47.496 [2024-10-07 09:53:42.186433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.497 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:47.497 [2024-10-07 09:53:42.186459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.497 qpair failed and we were unable to recover it.
00:32:47.497 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:47.497 [2024-10-07 09:53:42.186567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.497 [2024-10-07 09:53:42.186593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.497 qpair failed and we were unable to recover it.
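The harness lines interleaved above show the recovery path for this test case: a fresh nvmf_tgt (nvmfpid=1667544) is launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten (a shell helper whose body traces from common/autotest_common.sh in the lines above) then polls, with rpc_addr=/var/tmp/spdk.sock and max_retries=100, until the new process is up and listening on its RPC socket, while the initiator side keeps retrying and logging qpair failures in the meantime. As a rough, hedged approximation of what such a wait amounts to (this C sketch is not the actual SPDK helper; the path and retry count simply mirror the values visible in the trace):

/* Illustrative sketch only: poll a UNIX domain socket path until some process
 * starts accepting connections on it, giving up after a bounded number of
 * one-second retries. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;   /* a listener has appeared on the socket */
        }
        close(fd);      /* ENOENT or ECONNREFUSED: target not up yet */
        sleep(1);
    }
    return -1;          /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for listener: %s\n", strerror(errno));
        return 1;
    }
    printf("listener is up\n");
    return 0;
}

Polling by attempting connect(), rather than only checking that the path exists, confirms the socket is actually being accepted on, which is what the "listen on UNIX domain socket" message above implies.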
00:32:47.497 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.497 [2024-10-07 09:53:42.186700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.186726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 [2024-10-07 09:53:42.186887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.186920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.187886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.187918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 
00:32:47.497 [2024-10-07 09:53:42.188156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.188872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.188986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.189155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.189353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.189517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.189709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 
00:32:47.497 [2024-10-07 09:53:42.189865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.189901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.190854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.190992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.191121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.191303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 
00:32:47.497 [2024-10-07 09:53:42.191542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.191690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.191904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.191944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.192049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.192076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.192189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.192215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.497 qpair failed and we were unable to recover it. 00:32:47.497 [2024-10-07 09:53:42.192393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.497 [2024-10-07 09:53:42.192419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.192516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.192542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.192663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.192693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.192825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.192869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.193016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 
00:32:47.498 [2024-10-07 09:53:42.193199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.193361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.193519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.193685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.193839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.193882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.194046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.194074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.194265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.194290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.194454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.194479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.194664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.194693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.194883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.194945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 
00:32:47.498 [2024-10-07 09:53:42.195054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.195079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.195259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.195302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.195472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.195498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.195649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.195674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.195793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.195820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.195955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.196134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.196347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.196560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.196752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 
00:32:47.498 [2024-10-07 09:53:42.196945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.196971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.197066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.197092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.197255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.197311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.197456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.197484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.197662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.197706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.197866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.197899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.198033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.198239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.198411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.198605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 
00:32:47.498 [2024-10-07 09:53:42.198815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.498 [2024-10-07 09:53:42.198959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.498 [2024-10-07 09:53:42.198986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.498 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.199085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.199111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.199256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.199297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.199479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.199505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.199634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.199688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.199817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.199844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 
00:32:47.499 [2024-10-07 09:53:42.200477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.200867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.200976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.201875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.201909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 
00:32:47.499 [2024-10-07 09:53:42.202023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.202195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.202366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.202574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.202750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.202909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.202936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.203039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.203066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.203927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.203958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.204078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.204109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.204242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.204285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 
00:32:47.499 [2024-10-07 09:53:42.204430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.204454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.205493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.205536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.205718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.205746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.205877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.205911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.206023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.206050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.206845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.206873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.207041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.207068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.207239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.207266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.207451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.207487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.207641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.207667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 
00:32:47.499 [2024-10-07 09:53:42.207819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.207845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.208030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.208057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.208179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.499 [2024-10-07 09:53:42.208206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.499 qpair failed and we were unable to recover it. 00:32:47.499 [2024-10-07 09:53:42.208335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.208362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.208519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.208546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.208675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.208701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.208857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.208883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.208991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.209017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.209116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.209151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.209298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.209338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 
00:32:47.500 [2024-10-07 09:53:42.209480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.209509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.210571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.210616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.210825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.210851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.210990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.211118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.211310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.211463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.211703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.211837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.211863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.212022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 
00:32:47.500 [2024-10-07 09:53:42.212161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.212325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.212520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.212704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.212928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.212968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.213091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.213226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.213430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.213646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.213795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 
00:32:47.500 [2024-10-07 09:53:42.213953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.213980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.214093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.214120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.214236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.214264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.214394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.500 [2024-10-07 09:53:42.214422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.500 qpair failed and we were unable to recover it. 00:32:47.500 [2024-10-07 09:53:42.214581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.214608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.214708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.214735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.214837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.214863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.214978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.215106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.215247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 
00:32:47.501 [2024-10-07 09:53:42.215406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.215613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.215822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.215863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.216842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.216985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 
00:32:47.501 [2024-10-07 09:53:42.217101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.217298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.217465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.217666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.217794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.217821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.217980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.218125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.218287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.218456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.218703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 
00:32:47.501 [2024-10-07 09:53:42.218859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.218910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.219879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.219912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.220053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.220237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.220434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 
00:32:47.501 [2024-10-07 09:53:42.220641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.220783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.220939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.220967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.221089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.221151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.221260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.221286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.501 [2024-10-07 09:53:42.221423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.501 [2024-10-07 09:53:42.221450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.501 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.221575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.221602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.221745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.221772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.221934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.221962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.222076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 
00:32:47.502 [2024-10-07 09:53:42.222204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.222410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.222570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.222768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.222902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.222929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.223066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.223093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.224102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.224133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.224256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.224282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.224433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.224479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.224652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.224678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 
00:32:47.502 [2024-10-07 09:53:42.224845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.224871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.224984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.225010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.225115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.225142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.225288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.225329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.225473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.225499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.225600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.225641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.226423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.226451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.226637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.226662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.226796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.226822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.227011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 
00:32:47.502 [2024-10-07 09:53:42.227183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.227341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.227576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.227704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.227856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.227883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 
00:32:47.502 [2024-10-07 09:53:42.228799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.228825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.228991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.229019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.502 [2024-10-07 09:53:42.229147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.502 [2024-10-07 09:53:42.229187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.502 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.229343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.229419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.229588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.229664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.229882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.229918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.230025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.230158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.230349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.230544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 
00:32:47.503 [2024-10-07 09:53:42.230751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.230911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.230938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.231965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.231990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.232126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.232153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.232284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.232314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 
00:32:47.503 [2024-10-07 09:53:42.232442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.232468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.232639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.232678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.232817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.232843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.233957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.233984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.234111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.234138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 
00:32:47.503 [2024-10-07 09:53:42.234250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.234277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.234452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.234482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.234679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.234743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.235019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.235048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.235150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.235187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.235365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.235401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.235547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.235589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.235788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.235865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.236059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.236085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.236258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.236284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 
00:32:47.503 [2024-10-07 09:53:42.236442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.236483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.236634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.236661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.236824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.236913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.237077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.237104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.237282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.237308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.237452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.503 [2024-10-07 09:53:42.237517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.503 qpair failed and we were unable to recover it. 00:32:47.503 [2024-10-07 09:53:42.237729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.237761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.237900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.237941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.238091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.238117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.238259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.238285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 
00:32:47.504 [2024-10-07 09:53:42.238463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.238492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.238727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.238792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.239007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.239036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.239153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.239180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.239309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.239348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.239531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.239557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.239745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.239811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.241214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.241244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.241508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.241538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.241697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.241763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 
00:32:47.504 [2024-10-07 09:53:42.241875] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:32:47.504 [2024-10-07 09:53:42.241975] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.504 [2024-10-07 09:53:42.241989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.242014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.242120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.242146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.242293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.242333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.242531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.242595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.242814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.242846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.243002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.243132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.243296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.243468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 
00:32:47.504 [2024-10-07 09:53:42.243643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.243918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.243962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.244094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.244120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.244225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.244251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.244384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.244411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.244551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.244603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.244834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.244942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.245047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.245210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.245407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 
00:32:47.504 [2024-10-07 09:53:42.245525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.245693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.245855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.245884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.246059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.246088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.504 qpair failed and we were unable to recover it. 00:32:47.504 [2024-10-07 09:53:42.246201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.504 [2024-10-07 09:53:42.246227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 00:32:47.505 [2024-10-07 09:53:42.246322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.505 [2024-10-07 09:53:42.246350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 00:32:47.505 [2024-10-07 09:53:42.246483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.505 [2024-10-07 09:53:42.246510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 00:32:47.505 [2024-10-07 09:53:42.246647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.505 [2024-10-07 09:53:42.246674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 00:32:47.505 [2024-10-07 09:53:42.246853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.505 [2024-10-07 09:53:42.246947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 00:32:47.505 [2024-10-07 09:53:42.248011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.505 [2024-10-07 09:53:42.248042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.505 qpair failed and we were unable to recover it. 
00:32:47.790 [2024-10-07 09:53:42.248168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.248209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.248385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.248412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.249335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.249365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.249537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.249565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.249698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.249727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.249907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.249961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.250118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.250145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.250255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.250282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.790 [2024-10-07 09:53:42.250443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.790 [2024-10-07 09:53:42.250472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.790 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.250660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.250690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 
00:32:47.791 [2024-10-07 09:53:42.250843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.250870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.251851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.251879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.252008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.252186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.252398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 
00:32:47.791 [2024-10-07 09:53:42.252535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.252707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.252902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.252938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.253879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.253921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.254053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 
00:32:47.791 [2024-10-07 09:53:42.254250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.254459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.254586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.254776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.254936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.254963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.255082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.255282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.255498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.255646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.255788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 
00:32:47.791 [2024-10-07 09:53:42.255933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.255969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.256095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.256121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.256299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.256325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.791 qpair failed and we were unable to recover it. 00:32:47.791 [2024-10-07 09:53:42.256495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.791 [2024-10-07 09:53:42.256519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.256641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.256686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.256813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.256839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.256966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.256993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.257158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.257198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.257370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.257395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.257512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.257549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 
00:32:47.792 [2024-10-07 09:53:42.257701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.257727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.257873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.257905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.258821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.258846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.259007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.259034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.259981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.260011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 
00:32:47.792 [2024-10-07 09:53:42.260124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.260152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.260326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.260354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.260493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.260568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.260801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.260874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.261127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.261307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.261458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.261675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.261828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.261982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 
00:32:47.792 [2024-10-07 09:53:42.262108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.262289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.262475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.262649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.262842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.262932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.263068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.263094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.263220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.263247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.263405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.263429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.263617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.792 [2024-10-07 09:53:42.263711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.792 qpair failed and we were unable to recover it. 00:32:47.792 [2024-10-07 09:53:42.263931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.263969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 
00:32:47.793 [2024-10-07 09:53:42.264100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.264126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.264240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.264267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.264468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.264529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.264770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.264836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.265025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.265052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.265200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.265226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.265396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.265422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.265628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.265700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.265960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.265987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.266093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.266119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 
00:32:47.793 [2024-10-07 09:53:42.266244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.266270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.266396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.266438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.266601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.266677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.266885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.266936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.267046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.267073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.267249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.267288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.267524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.267580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.267749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.267802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.267924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.267952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.268052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.268078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 
00:32:47.793 [2024-10-07 09:53:42.268203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.268263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.268397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.268471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.268695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.268760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.269011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.269038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.269232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.269304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.269531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.269560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.269716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.269745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.270005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.270032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.270134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.270164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.270356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.270395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 
00:32:47.793 [2024-10-07 09:53:42.270527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.270553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.270827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.270907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.271060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.271086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.271267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.271292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.271453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.271481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.271662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.271723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.271947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.793 [2024-10-07 09:53:42.271973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.793 qpair failed and we were unable to recover it. 00:32:47.793 [2024-10-07 09:53:42.272100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.272126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.272298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.272331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.272543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.272607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 
00:32:47.794 [2024-10-07 09:53:42.272859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.272888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.273028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.273054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.273214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.273253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.273350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.273391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.273545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.273574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.273785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.273849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.274024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.274050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.274221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.274246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.274363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.274418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.274668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.274697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 
00:32:47.794 [2024-10-07 09:53:42.274870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.274916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.275035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.275060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.275262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.275290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.275404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.275429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.275722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.275786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.276036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.276226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.276394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.276564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.276717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 
00:32:47.794 [2024-10-07 09:53:42.276929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.276970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.277107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.277132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.277284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.277308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.277480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.277546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.277794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.277823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.278003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.278029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.278185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.278210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.278409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.278437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.278574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.278613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 00:32:47.794 [2024-10-07 09:53:42.278755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.794 [2024-10-07 09:53:42.278816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.794 qpair failed and we were unable to recover it. 
00:32:47.800 [2024-10-07 09:53:42.317139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.317165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.317326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.317355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.317494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.317533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.317729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.317753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.317912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.317941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.318063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.318088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.318263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.318304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.318508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.318537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.318699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.318723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.318865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.318911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 
00:32:47.800 [2024-10-07 09:53:42.319140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.319169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.319346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.319370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.319544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.319569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.319748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.319778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.319922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.319947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.320058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.320084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.320233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.320265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.320387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.320431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.800 qpair failed and we were unable to recover it. 00:32:47.800 [2024-10-07 09:53:42.320585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.800 [2024-10-07 09:53:42.320610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.320805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.320829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 
00:32:47.801 [2024-10-07 09:53:42.320979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.321004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.321189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.321221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.321394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.321423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.321554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.321579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.321738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.321803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.322035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.322065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.322201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.322241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.322385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.322425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.322576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.322602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.322770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.322842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 
00:32:47.801 [2024-10-07 09:53:42.323053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.323079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.323227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.323254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.323434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.323460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.323618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.323644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.323804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.323830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.323974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.324134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.324349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.324542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.324725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 
00:32:47.801 [2024-10-07 09:53:42.324930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.324960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.325916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.325942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.326075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.326100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.326221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.801 [2024-10-07 09:53:42.326319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.326343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 
00:32:47.801 [2024-10-07 09:53:42.326523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.326552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.326704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.326728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.326923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.326979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.327107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.327132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.327311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.327335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.327442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.327467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.327610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.327635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.801 qpair failed and we were unable to recover it. 00:32:47.801 [2024-10-07 09:53:42.327728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.801 [2024-10-07 09:53:42.327753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.327903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.327945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.328063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.328103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 
00:32:47.802 [2024-10-07 09:53:42.328281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.328306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.328479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.328503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.328661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.328690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.328867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.328898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.329834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.329859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 
00:32:47.802 [2024-10-07 09:53:42.330003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.330029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.330163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.330215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.330406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.330430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.330649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.330673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.330815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.330858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.331043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.331231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.331398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.331566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.331725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 
00:32:47.802 [2024-10-07 09:53:42.331932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.331973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.332141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.332166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.332310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.332351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.332494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.332537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.332720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.332744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.332915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.332955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.333060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.333287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.333431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.333604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 
00:32:47.802 [2024-10-07 09:53:42.333749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.333951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.333977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.334134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.334163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.334344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.334369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.334556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.334580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.802 [2024-10-07 09:53:42.334755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-10-07 09:53:42.334783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.802 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.334922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.334948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.335144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.335169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.335317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.335360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.335528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.335553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 
00:32:47.803 [2024-10-07 09:53:42.335699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.335778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.336918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.336945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.337080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.337269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.337446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 
00:32:47.803 [2024-10-07 09:53:42.337605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.337778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.337930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.337972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.338144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.338285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.338451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.338645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.338823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.338976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.339171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 
00:32:47.803 [2024-10-07 09:53:42.339372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.339503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.339702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.339886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.339917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.340112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.340295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.340478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.340656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.340857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 00:32:47.803 [2024-10-07 09:53:42.340988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.803 [2024-10-07 09:53:42.341015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.803 qpair failed and we were unable to recover it. 
00:32:47.803 [2024-10-07 09:53:42.341184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.803 [2024-10-07 09:53:42.341209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.803 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 09:53:42.341 through 09:53:42.381 ...]
00:32:47.809 [2024-10-07 09:53:42.381733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.381758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.381904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.381930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.382068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.382278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.382461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.382657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.382841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.382986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.383195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.383382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 
00:32:47.809 [2024-10-07 09:53:42.383550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.383706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.383878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.383924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.384077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.384102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-10-07 09:53:42.384255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-10-07 09:53:42.384294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.384435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.384475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.384617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.384657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.384846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.384870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.385064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.385089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.385251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.385292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-10-07 09:53:42.385453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.385477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.385663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.385687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.385862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.385898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.386956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.386982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.387133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.387157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-10-07 09:53:42.387336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.387360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.387491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.387517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.387662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.387687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.387817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.387843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.388059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.388243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.388416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.388618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.388793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.388985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-10-07 09:53:42.389200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.389371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.389560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.389716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.389912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.389937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.390123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.390148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.390308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.390341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.390501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.390526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.390644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.390669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.390816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.390841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-10-07 09:53:42.391004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.391029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.391208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.391248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-10-07 09:53:42.391386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-10-07 09:53:42.391415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.391582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.391622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.391787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.391812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.392010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.392036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.392209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.392234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.392417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.392441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.392602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.392631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.392795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.392819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-10-07 09:53:42.393022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.393151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.393377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.393550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.393706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.393903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.393930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.394065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.394106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.394271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.394300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.394427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.394466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.394651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.394675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-10-07 09:53:42.394848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.394877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.395055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.395249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.395437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.395597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.395811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.395965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.396006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.396148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.396173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.396318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.396343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.396530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.396571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-10-07 09:53:42.396742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.396794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.397041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.397068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.397244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.397274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.397427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.397451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.397637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.397662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.397834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.397863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.398042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.398071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.398257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.398282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.398441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.398471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-10-07 09:53:42.398613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.398651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-10-07 09:53:42.398792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-10-07 09:53:42.398872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.399102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.399129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.399289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.399313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.399459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.399484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.399664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.399694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.399808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.399847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.400011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.400207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.400385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.400570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-10-07 09:53:42.400747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.400903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.400944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.401102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.401289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.401462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.401667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.401822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.401994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.402176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.402347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-10-07 09:53:42.402539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.402725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.402945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.402974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.403132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.403158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.403323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.403349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.403531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.403559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.403716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.403745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.403937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.403964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.404097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.404123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.404308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.404332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-10-07 09:53:42.404479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.404503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.404688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.404717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.404869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.404898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.405111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.405309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.405443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.405615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.405843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.405990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-10-07 09:53:42.406016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-10-07 09:53:42.406195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-10-07 09:53:42.406220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 
00:32:47.813 [2024-10-07 09:53:42.406385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.813 [2024-10-07 09:53:42.406410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.813 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 -> sock connection error on tqpair=0x7f9e38000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back, with only the microsecond timestamps changing, through 2024-10-07 09:53:42.445858; the intervening repetitions are condensed here ...]
00:32:47.818 [2024-10-07 09:53:42.445791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.818 [2024-10-07 09:53:42.445858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.818 qpair failed and we were unable to recover it.
00:32:47.818 [2024-10-07 09:53:42.446039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-10-07 09:53:42.446065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-10-07 09:53:42.446243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-10-07 09:53:42.446282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-10-07 09:53:42.446439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-10-07 09:53:42.446463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-10-07 09:53:42.446622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-10-07 09:53:42.446647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.446818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.446843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.446985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.447159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.447349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.447510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.447722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-10-07 09:53:42.447928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.447954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.448115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.448141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.448290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.448318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.448475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.448502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.448610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.448636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.448777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.448821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-10-07 09:53:42.449666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.449847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.449981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.450146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.450355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.450526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.450710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.450906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.450936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.451034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.451203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-10-07 09:53:42.451405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.451595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.451756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.451929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.451955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.452093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.452285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.452481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.452628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.452786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-10-07 09:53:42.452990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.453016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-10-07 09:53:42.453170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-10-07 09:53:42.453212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.453316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.453341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.453514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.453554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.453683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.453710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.453834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.453860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.453993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.454019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.454177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.454219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.454323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.454349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.454496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.454522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-10-07 09:53:42.454713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-10-07 09:53:42.454754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 
00:32:47.820 [2024-10-07 09:53:42.455470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.820 [2024-10-07 09:53:42.455516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:47.820 qpair failed and we were unable to recover it.
00:32:47.821 [2024-10-07 09:53:42.461729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-10-07 09:53:42.461771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:47.821 qpair failed and we were unable to recover it.
00:32:47.821 [2024-10-07 09:53:42.461910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-10-07 09:53:42.461953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e2c000b90 with addr=10.0.0.2, port=4420
00:32:47.821 qpair failed and we were unable to recover it.
00:32:47.821 [2024-10-07 09:53:42.462994] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:47.821 [2024-10-07 09:53:42.463023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-10-07 09:53:42.463027] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:47.821 [2024-10-07 09:53:42.463047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:47.821 [2024-10-07 09:53:42.463053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:47.821 [2024-10-07 09:53:42.463060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:47.821 qpair failed and we were unable to recover it.
00:32:47.821 [2024-10-07 09:53:42.463072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:47.821 [2024-10-07 09:53:42.465338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:32:47.821 [2024-10-07 09:53:42.465394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:32:47.821 [2024-10-07 09:53:42.465444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:32:47.821 [2024-10-07 09:53:42.465447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:32:47.823 [2024-10-07 09:53:42.476494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.476521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.476655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.476682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.476843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.476869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.476979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.477936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.477963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 
00:32:47.823 [2024-10-07 09:53:42.478073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.823 [2024-10-07 09:53:42.478099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.823 qpair failed and we were unable to recover it. 00:32:47.823 [2024-10-07 09:53:42.478234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.478261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.478395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.478421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.478556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.478599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.478760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.478788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.478940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.478981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.479127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.479157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.479261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.479287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.479430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.479460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.479644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.479674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 
00:32:47.824 [2024-10-07 09:53:42.479792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.479820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.479974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.480864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.480901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 
00:32:47.824 [2024-10-07 09:53:42.481318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.481846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.481988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.482119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.482315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.482575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.482753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.482915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.482943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 
00:32:47.824 [2024-10-07 09:53:42.483075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.483236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.483423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.483610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.483757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.483936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.483963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.484099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.484125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.484225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.484251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.484406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.484449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.824 qpair failed and we were unable to recover it. 00:32:47.824 [2024-10-07 09:53:42.484558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.824 [2024-10-07 09:53:42.484587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 
00:32:47.825 [2024-10-07 09:53:42.484748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.484774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.484913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.484939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.485903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.485930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.486110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.486151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.486288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.486317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 
00:32:47.825 [2024-10-07 09:53:42.486479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.486538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.486715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.486747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.486900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.486930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.487073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.487100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.487236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.487281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.487481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.487508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.487641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.487693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.487885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.487944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.488126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.488152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.488295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.488322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 
00:32:47.825 [2024-10-07 09:53:42.488449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.488475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.488610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.488636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.488775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.488818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.488974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.489843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.489983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 
00:32:47.825 [2024-10-07 09:53:42.490105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.490259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.490422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.490554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.490745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.490948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.490976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.491110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.491136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.491310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.825 [2024-10-07 09:53:42.491340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.825 qpair failed and we were unable to recover it. 00:32:47.825 [2024-10-07 09:53:42.491517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.491547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.491694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.491720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 
00:32:47.826 [2024-10-07 09:53:42.491886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.491927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.492841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.492906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.493097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.493276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.493444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 
00:32:47.826 [2024-10-07 09:53:42.493621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.493761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.493918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.493944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.494816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.494843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.495011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.495039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 
00:32:47.826 [2024-10-07 09:53:42.495203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.495229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.495366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.495392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.495567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.495618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.495842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.495869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.496816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.496843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 
00:32:47.826 [2024-10-07 09:53:42.496988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.497835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.826 qpair failed and we were unable to recover it. 00:32:47.826 [2024-10-07 09:53:42.497977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.826 [2024-10-07 09:53:42.498003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.498174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.498201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.498334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.498360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.498526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.498552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 
00:32:47.827 [2024-10-07 09:53:42.498683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.498727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.498867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.498900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.499967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.499994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 00:32:47.827 [2024-10-07 09:53:42.500132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.827 [2024-10-07 09:53:42.500159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.827 qpair failed and we were unable to recover it. 
[The same three-message failure repeats continuously from [2024-10-07 09:53:42.500306] through [2024-10-07 09:53:42.536100] (log time 00:32:47.827-00:32:47.832): posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 (and intermittently tqpair=0x120f630) with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:32:47.832 [2024-10-07 09:53:42.536233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.536274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.536423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.536449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.536608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.536634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.536764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.536823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.537048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.537074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.537177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.537203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.537335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.832 [2024-10-07 09:53:42.537361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.832 qpair failed and we were unable to recover it. 00:32:47.832 [2024-10-07 09:53:42.537540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.537566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.537725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.537751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.537873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.537921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 
00:32:47.833 [2024-10-07 09:53:42.538037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.538197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.538318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.538512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.538699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.538854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.538904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.539015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.539208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.539406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.539578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 
00:32:47.833 [2024-10-07 09:53:42.539704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.539866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.539899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.540884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.540924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.541113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.541139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.541270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.541296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 
00:32:47.833 [2024-10-07 09:53:42.541433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.541476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.541617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.541643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.541800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.541860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.542078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.542104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.542237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.542263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.542362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.542389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.542559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.542585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.542786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.542834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 
00:32:47.833 [2024-10-07 09:53:42.543346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.543842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.543980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.544008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.544142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.544167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.544355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.544381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.833 [2024-10-07 09:53:42.544475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.833 [2024-10-07 09:53:42.544501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.833 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.544643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.544669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.544858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.544918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 
00:32:47.834 [2024-10-07 09:53:42.545075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.545102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.545238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.545281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.545458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.545493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.545627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.545654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.545810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.545854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.546053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.546239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.546438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.546605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.546767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 
00:32:47.834 [2024-10-07 09:53:42.546953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.546998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.547943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.547969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.548070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.548193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.548355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 
00:32:47.834 [2024-10-07 09:53:42.548562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.548752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.548910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.548954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.549131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.549157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.549279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.549305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.549467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.549509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.549675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.549701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.549856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.549913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 
00:32:47.834 [2024-10-07 09:53:42.550398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.550865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.550998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.551025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.551163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.551190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.551318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.551343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.551480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.551506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.834 [2024-10-07 09:53:42.551664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.834 [2024-10-07 09:53:42.551691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.834 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.551851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.551877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 
00:32:47.835 [2024-10-07 09:53:42.552040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.552199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.552328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.552492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.552677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.552862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.552887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.553063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.553222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.553383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.553542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 
00:32:47.835 [2024-10-07 09:53:42.553748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.553911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.553938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.554077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.554119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.554261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.554287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.554449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.554475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.554629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.554658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.554841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.554867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.555009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.555139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.555297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 
00:32:47.835 [2024-10-07 09:53:42.555482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.555657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.555848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.555874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.556047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.556073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.556240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.556269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.556418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.556444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.556601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.556627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.556752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.556804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.557007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.557173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 
00:32:47.835 [2024-10-07 09:53:42.557325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.557479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.557662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.557810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.557854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 00:32:47.835 [2024-10-07 09:53:42.558794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.835 [2024-10-07 09:53:42.558820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:47.835 qpair failed and we were unable to recover it. 
00:32:47.835 [2024-10-07 09:53:42.558949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.835 [2024-10-07 09:53:42.558976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:47.836 qpair failed and we were unable to recover it.
00:32:47.836 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each ending with "qpair failed and we were unable to recover it.", repeats for every connection attempt in between ...]
00:32:48.109 [2024-10-07 09:53:42.600106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.109 [2024-10-07 09:53:42.600132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:48.109 qpair failed and we were unable to recover it.
00:32:48.109 [2024-10-07 09:53:42.600347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.600373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.600466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.600491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.600692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.600735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.600978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.601163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.601366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.601553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.601709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.601899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.601944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.602061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.602088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 
00:32:48.109 [2024-10-07 09:53:42.602328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.602353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.602521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.602549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-10-07 09:53:42.602753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-10-07 09:53:42.602779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.602916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.602942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.603075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.603101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.603319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.603345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.603539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.603565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.603708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.603737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.603905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.603931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.604113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.604143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-10-07 09:53:42.604422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.604471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.604667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.604693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.604822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.604847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.605095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.605122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.605285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.605310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.605415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.605440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.605578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.605604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.605802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.605850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.606059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.606085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.606257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.606286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-10-07 09:53:42.606446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.606471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.606626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.606652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.606778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.606829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.607129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.607155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.607431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.607457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.607707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.607736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.607927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.607953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.608105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.608131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.608356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.608386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-10-07 09:53:42.608554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-10-07 09:53:42.608580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:48.110 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:32:48.110 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:32:48.110 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:48.110 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.110 [... interleaved with the trace lines above, the connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f9e38000b90 (addr=10.0.0.2, port=4420) continues from 09:53:42.608723 through 09:53:42.609939 ...]
00:32:48.110 [2024-10-07 09:53:42.610054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.110 [2024-10-07 09:53:42.610080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:48.110 qpair failed and we were unable to recover it.
00:32:48.113 [... the same three-line failure sequence for tqpair=0x7f9e38000b90 (addr=10.0.0.2, port=4420) repeats continuously from 09:53:42.610282 through 09:53:42.625277 ...]
00:32:48.113 [2024-10-07 09:53:42.625402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.113 [2024-10-07 09:53:42.625428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420
00:32:48.113 qpair failed and we were unable to recover it.
00:32:48.114 [... the same three-line failure sequence repeats from 09:53:42.625528 through 09:53:42.630021, now alternating between tqpair=0x7f9e38000b90 and tqpair=0x7f9e30000b90 (both with addr=10.0.0.2, port=4420), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:32:48.114 [2024-10-07 09:53:42.630123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.630881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.630989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.631125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.631251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.631399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-10-07 09:53:42.631588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.631718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.631846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.631872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e38000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.632881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.632919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-10-07 09:53:42.633019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-10-07 09:53:42.633047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-10-07 09:53:42.633158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 [2024-10-07 09:53:42.633190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 [2024-10-07 09:53:42.633306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 [2024-10-07 09:53:42.633333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:48.114 [2024-10-07 09:53:42.633490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 [2024-10-07 09:53:42.633517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 [2024-10-07 09:53:42.633609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 [2024-10-07 09:53:42.633635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 [2024-10-07 09:53:42.633800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.114 [2024-10-07 09:53:42.633827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 [2024-10-07 09:53:42.633918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.114 [2024-10-07 09:53:42.633946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.114 qpair failed and we were unable to recover it.
00:32:48.114 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.114 [2024-10-07 09:53:42.634056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.115 [2024-10-07 09:53:42.634082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.115 qpair failed and we were unable to recover it.
00:32:48.115 [2024-10-07 09:53:42.634229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.115 [2024-10-07 09:53:42.634255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420
00:32:48.115 qpair failed and we were unable to recover it.
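Interleaved with the reconnect noise above, the harness xtrace lines show test case nvmf_target_disconnect_tc2 carrying on with target-side setup: it installs the SIGINT/SIGTERM/EXIT trap from nvmf/common.sh and then creates a 64 MB malloc bdev named Malloc0 with a 512-byte block size. In the SPDK test harness, rpc_cmd passes its arguments through to scripts/rpc.py, so a standalone equivalent of that call against an already-running target would look roughly like this sketch (default RPC socket assumed):

    # Sketch only - assumes an SPDK target is already running and serving RPCs on
    # the default /var/tmp/spdk.sock socket; mirrors the rpc_cmd call in the trace.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512-byte blocks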
00:32:48.115 [2024-10-07 09:53:42.634353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.634379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.634532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.634558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.634740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.634767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.634901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.634929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.635863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.635896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-10-07 09:53:42.636011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.636166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.636351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.636480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.636686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.636880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.636917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.637058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.637248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.637446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.637614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-10-07 09:53:42.637814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.637956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.637983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.638968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.638995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.639108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.639134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.639261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.639288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-10-07 09:53:42.639392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.639418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.639577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.639603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.639771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.639797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.639963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.640002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.640143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.640170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.640258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.640284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-10-07 09:53:42.640444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-10-07 09:53:42.640469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.640608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.640634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.640775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.640800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.640929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.640955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-10-07 09:53:42.641065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.641091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.641257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.641283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.641435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.641488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.641673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.641723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.641865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.641896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.642056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.642177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.642394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.642544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.642691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-10-07 09:53:42.642864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.642898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.643971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.643998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.644137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.644162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.644292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.644336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.644497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.644522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-10-07 09:53:42.644702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.644731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.644902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.644956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.645925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.645968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.646070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.646095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-10-07 09:53:42.646227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-10-07 09:53:42.646253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-10-07 09:53:42.646412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.646471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.646664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.646690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.646870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.646902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.647453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.647506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.647756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.647785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.647936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.647963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.648060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.648246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.648461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.648581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-10-07 09:53:42.648757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.648918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.648944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.649943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.649970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.650104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.650129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.650284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.650309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-10-07 09:53:42.650487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.650516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.650649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.650677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.650911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.650937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.651073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.651099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.651346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.651398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.651542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.651567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.651697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.651722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.651914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.651961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.652100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.652125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.652294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.652319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-10-07 09:53:42.652509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.652566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.652822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.652850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.653033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.653262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.653478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.653672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.653846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.653991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.654017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.654156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-10-07 09:53:42.654181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-10-07 09:53:42.654348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.654373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 
00:32:48.118 [2024-10-07 09:53:42.654495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.654520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.654652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.654694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.654868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.654908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.655909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.655936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.656040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.656067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e30000b90 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 
00:32:48.118 [2024-10-07 09:53:42.656202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.656228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.656416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.656467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.656625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.656650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.656844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.656872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.657956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.657982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 
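For reference, errno = 111 on Linux is ECONNREFUSED: the SPDK host stack's connect() to 10.0.0.2:4420 is refused, typically because nothing is listening on that address/port yet; the listener for this subsystem is only added later in this log (see the nvmf_subsystem_add_listener trace further down). A minimal shell sketch to decode the errno value, assuming python3 is available on the node:
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused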
00:32:48.118 [2024-10-07 09:53:42.658106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.658132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 [2024-10-07 09:53:42.658259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.658285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 [2024-10-07 09:53:42.658445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.658488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 Malloc0
00:32:48.118 [2024-10-07 09:53:42.658735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.658772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 [2024-10-07 09:53:42.658987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.659013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 [2024-10-07 09:53:42.659137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.659165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:48.118 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.118 [2024-10-07 09:53:42.659332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.659363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
00:32:48.118 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.118 [2024-10-07 09:53:42.659535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.118 [2024-10-07 09:53:42.659560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.118 qpair failed and we were unable to recover it.
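The bash trace mixed into the log above shows the target-side setup starting while the host keeps retrying: a lone Malloc0 line (most likely the bdev name echoed back by an earlier bdev create call) and rpc_cmd nvmf_create_transport -t tcp -o. Outside the test harness (where rpc_cmd is essentially a wrapper around scripts/rpc.py), roughly equivalent steps look like the following sketch; the 64 MiB size and 512-byte block size are illustrative values, not taken from this run:
  # create the TCP transport for the NVMe-oF target
  scripts/rpc.py nvmf_create_transport -t tcp
  # create a malloc bdev to export later as a namespace; the name matches the Malloc0 seen in the trace
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512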
00:32:48.118 [2024-10-07 09:53:42.659721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.659746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.659906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.659936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.660062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.660087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.660262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.660290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.660483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.660518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.660705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.660730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.660858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.660884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.661206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.118 [2024-10-07 09:53:42.661264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.118 qpair failed and we were unable to recover it. 00:32:48.118 [2024-10-07 09:53:42.661489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.661515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.661672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.661698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 
00:32:48.119 [2024-10-07 09:53:42.661816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.661841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.661983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.119 [2024-10-07 09:53:42.662346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.662926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.662952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.663083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.663244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 
00:32:48.119 [2024-10-07 09:53:42.663409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.663593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.663748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.663924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.663950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.664936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.664961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 
00:32:48.119 [2024-10-07 09:53:42.665055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.665213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.665422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.665575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.665740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.665931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.665956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.666080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.666266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.666431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.666618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 
00:32:48.119 [2024-10-07 09:53:42.666802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.666953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.666978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.667150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.667192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.667413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.667462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.667640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.667665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.667800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.667825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.119 qpair failed and we were unable to recover it. 00:32:48.119 [2024-10-07 09:53:42.668000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.119 [2024-10-07 09:53:42.668029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.668185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.668344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.668522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 
00:32:48.120 [2024-10-07 09:53:42.668675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.668799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.668962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.668989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.669114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.669139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.669272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.669315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.669476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.669506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.669636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.669662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.669829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.669854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.670025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.670051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.670213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.670238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 
00:32:48.120 [2024-10-07 09:53:42.670343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.670386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 [2024-10-07 09:53:42.670490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] [2024-10-07 09:53:42.670515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:48.120 [2024-10-07 09:53:42.670647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.670673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:48.120 [2024-10-07 09:53:42.670834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.670859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.120 [2024-10-07 09:53:42.671044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.671071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 [2024-10-07 09:53:42.671194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.671220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 [2024-10-07 09:53:42.671360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.671403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
00:32:48.120 [2024-10-07 09:53:42.671564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.120 [2024-10-07 09:53:42.671589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420
00:32:48.120 qpair failed and we were unable to recover it.
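The trace above shows the next step, rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, running while the connection retries continue. A hedged sketch of the same step done by hand with scripts/rpc.py, plus a way to inspect the result:
  # create the subsystem; -a allows any host NQN to connect, -s sets the serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # list configured subsystems to confirm it exists
  scripts/rpc.py nvmf_get_subsystems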
00:32:48.120 [2024-10-07 09:53:42.671717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.671743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.671876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.671911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.672064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.672092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.672245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.672270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.672409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.672450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.672624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.672649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.672809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.672834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.673010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.673039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.673280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.673330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.673537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.673562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 
00:32:48.120 [2024-10-07 09:53:42.673726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.120 [2024-10-07 09:53:42.673752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.120 qpair failed and we were unable to recover it. 00:32:48.120 [2024-10-07 09:53:42.673937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.673966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.674132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.674161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.674264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.674289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.674442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.674497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.674704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.674729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.674863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.674889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.675019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.675212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.675370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 
00:32:48.121 [2024-10-07 09:53:42.675549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.675699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.675896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.675922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.676913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.676939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.677092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.677117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 
00:32:48.121 [2024-10-07 09:53:42.677238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.677263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.677423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.677448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.677605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.677631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.677808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.677833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.678029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.678055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.678210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.678236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.678366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.678391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.121 [2024-10-07 09:53:42.678551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.678576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.121 [2024-10-07 09:53:42.678743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.678771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 
00:32:48.121 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.121 [2024-10-07 09:53:42.678935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.121 [2024-10-07 09:53:42.678982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.679147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.679172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.679351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.679376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.679559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.679584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.679719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.679745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.679902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.679928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.680053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.680078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.680242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.680267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 00:32:48.121 [2024-10-07 09:53:42.680426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.121 [2024-10-07 09:53:42.680451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.121 qpair failed and we were unable to recover it. 
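Buried in the two lines above is the next setup step, rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0, which attaches the Malloc0 bdev to the subsystem as a namespace. Done by hand it would look like this minimal sketch, with the namespace ID left for SPDK to assign:
  # expose bdev Malloc0 as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0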
00:32:48.122 [2024-10-07 09:53:42.680581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.680606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.680728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.680753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.680935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.680961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.681906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.681932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.682065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 
00:32:48.122 [2024-10-07 09:53:42.682215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.682361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.682479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.682664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.682817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.682846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 
00:32:48.122 [2024-10-07 09:53:42.683812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.683964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.683990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.684947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.684973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.685136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.685162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.685319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.685345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 
00:32:48.122 [2024-10-07 09:53:42.685471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.685500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.685661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.685686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.685854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.685880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.686020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.686045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.686208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.686233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.686366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.686392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.122 [2024-10-07 09:53:42.686550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 [2024-10-07 09:53:42.686575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 00:32:48.122 [2024-10-07 09:53:42.686671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.122 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.122 [2024-10-07 09:53:42.686696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.122 qpair failed and we were unable to recover it. 
00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.123 [2024-10-07 09:53:42.686850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.686878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.123 [2024-10-07 09:53:42.687075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.687100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.687273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.687299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.687460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.687485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.687645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.687671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.687832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.687857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.688028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.688193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.688342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 
00:32:48.123 [2024-10-07 09:53:42.688522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.688705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.688913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.688942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.689923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.689949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.690041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.690067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 
00:32:48.123 [2024-10-07 09:53:42.690223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.690249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.690410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.123 [2024-10-07 09:53:42.690435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120f630 with addr=10.0.0.2, port=4420 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.690492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.123 [2024-10-07 09:53:42.693241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.123 [2024-10-07 09:53:42.693395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.123 [2024-10-07 09:53:42.693426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.123 [2024-10-07 09:53:42.693442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.123 [2024-10-07 09:53:42.693456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.123 [2024-10-07 09:53:42.693489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.123 qpair failed and we were unable to recover it. 
00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.123 09:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1667069 00:32:48.123 [2024-10-07 09:53:42.703069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.123 [2024-10-07 09:53:42.703199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.123 [2024-10-07 09:53:42.703226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.123 [2024-10-07 09:53:42.703241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.123 [2024-10-07 09:53:42.703254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.123 [2024-10-07 09:53:42.703285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.713113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.123 [2024-10-07 09:53:42.713234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.123 [2024-10-07 09:53:42.713268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.123 [2024-10-07 09:53:42.713284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.123 [2024-10-07 09:53:42.713297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.123 [2024-10-07 09:53:42.713326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.123 qpair failed and we were unable to recover it. 
00:32:48.123 [2024-10-07 09:53:42.722995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.123 [2024-10-07 09:53:42.723147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.123 [2024-10-07 09:53:42.723172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.123 [2024-10-07 09:53:42.723187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.123 [2024-10-07 09:53:42.723200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.123 [2024-10-07 09:53:42.723228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.123 [2024-10-07 09:53:42.732961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.123 [2024-10-07 09:53:42.733071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.123 [2024-10-07 09:53:42.733097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.123 [2024-10-07 09:53:42.733112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.123 [2024-10-07 09:53:42.733124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.123 [2024-10-07 09:53:42.733153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.123 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.742963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.743077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.743102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.743117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.743130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.743159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 
00:32:48.124 [2024-10-07 09:53:42.752991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.753111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.753136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.753151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.753163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.753198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.763037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.763188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.763213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.763228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.763241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.763270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.773111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.773233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.773259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.773275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.773288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.773316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 
00:32:48.124 [2024-10-07 09:53:42.783143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.783252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.783279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.783293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.783306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.783334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.793190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.793334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.793359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.793385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.793399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.793427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.803167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.803283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.803313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.803328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.803341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.803370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 
00:32:48.124 [2024-10-07 09:53:42.813229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.813368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.813393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.813408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.813420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.813449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.823208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.823320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.823346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.823360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.823373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.823401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.833235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.833346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.833372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.833386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.833399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.833427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 
00:32:48.124 [2024-10-07 09:53:42.843272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.843389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.843415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.843428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.124 [2024-10-07 09:53:42.843441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.124 [2024-10-07 09:53:42.843475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.124 qpair failed and we were unable to recover it. 00:32:48.124 [2024-10-07 09:53:42.853361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.124 [2024-10-07 09:53:42.853471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.124 [2024-10-07 09:53:42.853497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.124 [2024-10-07 09:53:42.853511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.853524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.853552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 00:32:48.125 [2024-10-07 09:53:42.863348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.125 [2024-10-07 09:53:42.863469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.125 [2024-10-07 09:53:42.863494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.125 [2024-10-07 09:53:42.863509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.863522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.863551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 
00:32:48.125 [2024-10-07 09:53:42.873368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.125 [2024-10-07 09:53:42.873472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.125 [2024-10-07 09:53:42.873498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.125 [2024-10-07 09:53:42.873512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.873525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.873553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 00:32:48.125 [2024-10-07 09:53:42.883379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.125 [2024-10-07 09:53:42.883504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.125 [2024-10-07 09:53:42.883529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.125 [2024-10-07 09:53:42.883543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.883556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.883585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 00:32:48.125 [2024-10-07 09:53:42.893393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.125 [2024-10-07 09:53:42.893505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.125 [2024-10-07 09:53:42.893536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.125 [2024-10-07 09:53:42.893551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.893564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.893592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 
00:32:48.125 [2024-10-07 09:53:42.903460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.125 [2024-10-07 09:53:42.903571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.125 [2024-10-07 09:53:42.903596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.125 [2024-10-07 09:53:42.903611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.125 [2024-10-07 09:53:42.903624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.125 [2024-10-07 09:53:42.903659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.125 qpair failed and we were unable to recover it. 00:32:48.384 [2024-10-07 09:53:42.913465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.384 [2024-10-07 09:53:42.913576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.384 [2024-10-07 09:53:42.913601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.384 [2024-10-07 09:53:42.913616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.384 [2024-10-07 09:53:42.913628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.384 [2024-10-07 09:53:42.913656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.384 qpair failed and we were unable to recover it. 00:32:48.384 [2024-10-07 09:53:42.923538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.384 [2024-10-07 09:53:42.923653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.384 [2024-10-07 09:53:42.923679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.384 [2024-10-07 09:53:42.923693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.384 [2024-10-07 09:53:42.923706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.384 [2024-10-07 09:53:42.923733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.384 qpair failed and we were unable to recover it. 
00:32:48.384 [2024-10-07 09:53:42.933555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.384 [2024-10-07 09:53:42.933667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.384 [2024-10-07 09:53:42.933692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.384 [2024-10-07 09:53:42.933706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.384 [2024-10-07 09:53:42.933719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.384 [2024-10-07 09:53:42.933754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.384 qpair failed and we were unable to recover it. 00:32:48.384 [2024-10-07 09:53:42.943625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.384 [2024-10-07 09:53:42.943738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.384 [2024-10-07 09:53:42.943771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.384 [2024-10-07 09:53:42.943786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.384 [2024-10-07 09:53:42.943798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.384 [2024-10-07 09:53:42.943836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.384 qpair failed and we were unable to recover it. 00:32:48.384 [2024-10-07 09:53:42.953594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.384 [2024-10-07 09:53:42.953707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.384 [2024-10-07 09:53:42.953733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.384 [2024-10-07 09:53:42.953747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.384 [2024-10-07 09:53:42.953760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.384 [2024-10-07 09:53:42.953788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.384 qpair failed and we were unable to recover it. 
00:32:48.384 [2024-10-07 09:53:42.963654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:42.963791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:42.963817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:42.963831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:42.963844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:42.963872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:42.973644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:42.973757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:42.973783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:42.973797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:42.973810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:42.973838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:42.983731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:42.983859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:42.983901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:42.983920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:42.983934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:42.983962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 
00:32:48.385 [2024-10-07 09:53:42.993696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:42.993805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:42.993830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:42.993844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:42.993857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:42.993889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.003722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.003838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.003864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.003879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.003898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.003928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.013764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.013877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.013909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.013924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.013938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.013966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 
00:32:48.385 [2024-10-07 09:53:43.023787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.023903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.023929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.023944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.023963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.023991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.033883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.034008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.034034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.034048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.034061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.034089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.043848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.043969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.043995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.044009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.044022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.044050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 
00:32:48.385 [2024-10-07 09:53:43.053911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.054019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.054044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.054059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.054072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.054101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.063903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.064019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.064044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.064059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.064071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.064101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.073956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.074077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.074103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.074117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.074130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.074158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 
00:32:48.385 [2024-10-07 09:53:43.083961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.084078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.084103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.084118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.385 [2024-10-07 09:53:43.084130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.385 [2024-10-07 09:53:43.084159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.385 qpair failed and we were unable to recover it. 00:32:48.385 [2024-10-07 09:53:43.093991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.385 [2024-10-07 09:53:43.094104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.385 [2024-10-07 09:53:43.094129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.385 [2024-10-07 09:53:43.094144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.094157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.094185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.104039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.104169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.104195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.104209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.104222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.104250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 
00:32:48.386 [2024-10-07 09:53:43.114059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.114182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.114207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.114221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.114240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.114269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.124127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.124241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.124266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.124281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.124294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.124322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.134102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.134218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.134245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.134259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.134271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.134299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 
00:32:48.386 [2024-10-07 09:53:43.144177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.144298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.144323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.144337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.144350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.144378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.154175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.154291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.154317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.154331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.154344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.154372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.164243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.164366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.164392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.164406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.164420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.164448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 
00:32:48.386 [2024-10-07 09:53:43.174219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.174331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.174357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.174371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.174384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.174412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.184245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.184350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.184377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.184392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.184405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.184434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 00:32:48.386 [2024-10-07 09:53:43.194270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.386 [2024-10-07 09:53:43.194378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.386 [2024-10-07 09:53:43.194403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.386 [2024-10-07 09:53:43.194418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.386 [2024-10-07 09:53:43.194430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.386 [2024-10-07 09:53:43.194457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.386 qpair failed and we were unable to recover it. 
00:32:48.645 [2024-10-07 09:53:43.204338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.645 [2024-10-07 09:53:43.204467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.645 [2024-10-07 09:53:43.204493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.645 [2024-10-07 09:53:43.204507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.204526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.204557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.214397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.214506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.214532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.214546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.214559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.214597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.224393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.224510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.224535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.224550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.224563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.224591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 
00:32:48.646 [2024-10-07 09:53:43.234409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.234516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.234541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.234556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.234569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.234597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.244449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.244568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.244592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.244605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.244617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.244645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.254492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.254617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.254642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.254672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.254687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.254733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 
00:32:48.646 [2024-10-07 09:53:43.264533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.264676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.264703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.264718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.264730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.264758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.274504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.274624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.274650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.274664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.274678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.274707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.284598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.284715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.284741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.284756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.284769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.284797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 
00:32:48.646 [2024-10-07 09:53:43.294616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.294731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.294757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.294771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.294790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.294819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.304669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.304777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.304802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.304817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.304830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.304859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.314659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.314769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.314795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.314810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.314823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.314851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 
00:32:48.646 [2024-10-07 09:53:43.324776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.324922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.324953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.324968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.324981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.325009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.646 qpair failed and we were unable to recover it. 00:32:48.646 [2024-10-07 09:53:43.334743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.646 [2024-10-07 09:53:43.334863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.646 [2024-10-07 09:53:43.334888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.646 [2024-10-07 09:53:43.334915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.646 [2024-10-07 09:53:43.334928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.646 [2024-10-07 09:53:43.334961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.344708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.344850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.344876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.344898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.344913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.344945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 
00:32:48.647 [2024-10-07 09:53:43.354755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.354869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.354903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.354919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.354932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.354961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.364839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.364965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.364991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.365005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.365018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.365048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.374801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.374900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.374941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.374956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.374970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.375000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 
00:32:48.647 [2024-10-07 09:53:43.384824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.384943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.384969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.384989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.385002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.385031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.394855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.394973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.394999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.395013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.395026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.395055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.404916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.405033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.405058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.405072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.405085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.405114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 
00:32:48.647 [2024-10-07 09:53:43.414925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.415034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.415059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.415073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.415086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.415114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.424959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.425067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.425093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.425107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.425120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.425149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.435037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.435169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.435195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.435209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.435222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.435250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 
00:32:48.647 [2024-10-07 09:53:43.445045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.445162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.445188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.445202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.445214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.445243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.647 [2024-10-07 09:53:43.455038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.647 [2024-10-07 09:53:43.455184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.647 [2024-10-07 09:53:43.455210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.647 [2024-10-07 09:53:43.455224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.647 [2024-10-07 09:53:43.455236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.647 [2024-10-07 09:53:43.455264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.647 qpair failed and we were unable to recover it. 00:32:48.907 [2024-10-07 09:53:43.465109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.465228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.465254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.465269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.465282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.465310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 
00:32:48.907 [2024-10-07 09:53:43.475271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.475405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.475431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.475458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.475472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.475502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 00:32:48.907 [2024-10-07 09:53:43.485267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.485385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.485411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.485425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.485437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.485467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 00:32:48.907 [2024-10-07 09:53:43.495269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.495381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.495407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.495422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.495435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.495462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 
00:32:48.907 [2024-10-07 09:53:43.505235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.505342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.505368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.505382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.505395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.505422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 00:32:48.907 [2024-10-07 09:53:43.515301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.515409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.515434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.515448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.515461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.515489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 00:32:48.907 [2024-10-07 09:53:43.525274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.525391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.525417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.525431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.525443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.907 [2024-10-07 09:53:43.525471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.907 qpair failed and we were unable to recover it. 
00:32:48.907 [2024-10-07 09:53:43.535367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.907 [2024-10-07 09:53:43.535527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.907 [2024-10-07 09:53:43.535552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.907 [2024-10-07 09:53:43.535567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.907 [2024-10-07 09:53:43.535580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.535607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.545356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.545467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.545491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.545506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.545519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.545547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.555345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.555461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.555487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.555501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.555514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.555555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 
00:32:48.908 [2024-10-07 09:53:43.565406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.565527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.565552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.565573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.565586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.565614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.575508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.575618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.575644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.575658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.575671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.575699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.585413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.585523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.585549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.585564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.585576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.585604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 
00:32:48.908 [2024-10-07 09:53:43.595480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.595587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.595612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.595626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.595639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.595667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.605537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.605657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.605684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.605698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.605710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.605739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.615521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.615636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.615662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.615677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.615690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.615718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 
00:32:48.908 [2024-10-07 09:53:43.625554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.625664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.625690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.625704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.625718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.625746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.635578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.635689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.635715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.635731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.635743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.635771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.645654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.645775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.645801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.645815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.645828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.645856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 
00:32:48.908 [2024-10-07 09:53:43.655676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.655788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.655814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.655835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.655848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.655877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.665762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.665887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.665930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.908 [2024-10-07 09:53:43.665945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.908 [2024-10-07 09:53:43.665958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.908 [2024-10-07 09:53:43.665987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.908 qpair failed and we were unable to recover it. 00:32:48.908 [2024-10-07 09:53:43.675688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.908 [2024-10-07 09:53:43.675801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.908 [2024-10-07 09:53:43.675826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.909 [2024-10-07 09:53:43.675841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.909 [2024-10-07 09:53:43.675854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.909 [2024-10-07 09:53:43.675882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.909 qpair failed and we were unable to recover it. 
00:32:48.909 [2024-10-07 09:53:43.685783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.909 [2024-10-07 09:53:43.685929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.909 [2024-10-07 09:53:43.685955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.909 [2024-10-07 09:53:43.685969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.909 [2024-10-07 09:53:43.685982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.909 [2024-10-07 09:53:43.686011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.909 qpair failed and we were unable to recover it. 00:32:48.909 [2024-10-07 09:53:43.695767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.909 [2024-10-07 09:53:43.695881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.909 [2024-10-07 09:53:43.695914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.909 [2024-10-07 09:53:43.695929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.909 [2024-10-07 09:53:43.695942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.909 [2024-10-07 09:53:43.695971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.909 qpair failed and we were unable to recover it. 00:32:48.909 [2024-10-07 09:53:43.705804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.909 [2024-10-07 09:53:43.705931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.909 [2024-10-07 09:53:43.705956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.909 [2024-10-07 09:53:43.705970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.909 [2024-10-07 09:53:43.705982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.909 [2024-10-07 09:53:43.706010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.909 qpair failed and we were unable to recover it. 
00:32:48.909 [2024-10-07 09:53:43.715797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.909 [2024-10-07 09:53:43.715918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.909 [2024-10-07 09:53:43.715944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.909 [2024-10-07 09:53:43.715959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.909 [2024-10-07 09:53:43.715971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:48.909 [2024-10-07 09:53:43.716000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.909 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.725841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.725985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.726010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.726025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.726037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.726066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.735878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.735994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.736020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.736034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.736047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.736075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 
00:32:49.168 [2024-10-07 09:53:43.745884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.746000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.746030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.746046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.746059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.746087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.756021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.756149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.756175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.756189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.756203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.756231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.766016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.766131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.766155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.766170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.766183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.766211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 
00:32:49.168 [2024-10-07 09:53:43.776023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.776165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.776190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.776204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.776217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.776245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.786044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.786168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.786193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.786207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.786220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.168 [2024-10-07 09:53:43.786249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.168 qpair failed and we were unable to recover it. 00:32:49.168 [2024-10-07 09:53:43.796083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.168 [2024-10-07 09:53:43.796213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.168 [2024-10-07 09:53:43.796238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.168 [2024-10-07 09:53:43.796252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.168 [2024-10-07 09:53:43.796266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.796294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 
00:32:49.169 [2024-10-07 09:53:43.806174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.806294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.806319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.806334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.806347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.806375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.816102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.816217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.816243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.816257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.816270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.816299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.826140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.826263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.826288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.826302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.826315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.826343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 
00:32:49.169 [2024-10-07 09:53:43.836214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.836334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.836365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.836380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.836393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.836421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.846201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.846346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.846371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.846385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.846399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.846427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.856241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.856384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.856409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.856423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.856435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.856464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 
00:32:49.169 [2024-10-07 09:53:43.866308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.866416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.866442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.866456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.866469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.866504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.876269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.876382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.876408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.876422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.876435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.876469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.886371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.886485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.886510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.886524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.886538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.886566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 
00:32:49.169 [2024-10-07 09:53:43.896354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.896465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.896491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.896505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.896517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.896545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.906348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.906454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.906480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.906495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.906508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.906536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.916443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.916552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.916577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.916592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.916605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.916632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 
00:32:49.169 [2024-10-07 09:53:43.926410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.926535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.926566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.169 [2024-10-07 09:53:43.926581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.169 [2024-10-07 09:53:43.926594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.169 [2024-10-07 09:53:43.926623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.169 qpair failed and we were unable to recover it. 00:32:49.169 [2024-10-07 09:53:43.936478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.169 [2024-10-07 09:53:43.936593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.169 [2024-10-07 09:53:43.936619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.170 [2024-10-07 09:53:43.936633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.170 [2024-10-07 09:53:43.936646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.170 [2024-10-07 09:53:43.936684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.170 qpair failed and we were unable to recover it. 00:32:49.170 [2024-10-07 09:53:43.946518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.170 [2024-10-07 09:53:43.946648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.170 [2024-10-07 09:53:43.946674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.170 [2024-10-07 09:53:43.946688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.170 [2024-10-07 09:53:43.946701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.170 [2024-10-07 09:53:43.946730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.170 qpair failed and we were unable to recover it. 
00:32:49.170 [2024-10-07 09:53:43.956507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.170 [2024-10-07 09:53:43.956622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.170 [2024-10-07 09:53:43.956648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.170 [2024-10-07 09:53:43.956662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.170 [2024-10-07 09:53:43.956675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.170 [2024-10-07 09:53:43.956704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.170 qpair failed and we were unable to recover it. 00:32:49.170 [2024-10-07 09:53:43.966614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.170 [2024-10-07 09:53:43.966732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.170 [2024-10-07 09:53:43.966758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.170 [2024-10-07 09:53:43.966772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.170 [2024-10-07 09:53:43.966784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.170 [2024-10-07 09:53:43.966819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.170 qpair failed and we were unable to recover it. 00:32:49.170 [2024-10-07 09:53:43.976627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.170 [2024-10-07 09:53:43.976753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.170 [2024-10-07 09:53:43.976779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.170 [2024-10-07 09:53:43.976793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.170 [2024-10-07 09:53:43.976806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.170 [2024-10-07 09:53:43.976834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.170 qpair failed and we were unable to recover it. 
00:32:49.429 [2024-10-07 09:53:43.986567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:43.986683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:43.986708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:43.986722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:43.986735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:43.986763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 00:32:49.429 [2024-10-07 09:53:43.996584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:43.996719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:43.996744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:43.996759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:43.996772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:43.996801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 00:32:49.429 [2024-10-07 09:53:44.006673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:44.006791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:44.006816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:44.006830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:44.006843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:44.006872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 
00:32:49.429 [2024-10-07 09:53:44.016696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:44.016811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:44.016843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:44.016858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:44.016872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:44.016907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 00:32:49.429 [2024-10-07 09:53:44.026687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:44.026797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:44.026822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:44.026837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:44.026850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:44.026878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 00:32:49.429 [2024-10-07 09:53:44.036718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:44.036827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:44.036853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:44.036867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:44.036880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:44.036915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 
00:32:49.429 [2024-10-07 09:53:44.046762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.429 [2024-10-07 09:53:44.046888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.429 [2024-10-07 09:53:44.046922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.429 [2024-10-07 09:53:44.046936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.429 [2024-10-07 09:53:44.046949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.429 [2024-10-07 09:53:44.046977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.429 qpair failed and we were unable to recover it. 00:32:49.429 [2024-10-07 09:53:44.056794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.056929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.056956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.056970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.056983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.057017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.066844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.066986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.067012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.067026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.067040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.067069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 
00:32:49.430 [2024-10-07 09:53:44.076918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.077023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.077049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.077063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.077076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.077105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.086872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.087012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.087037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.087052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.087064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.087093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.096903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.097025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.097050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.097065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.097077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.097106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 
00:32:49.430 [2024-10-07 09:53:44.106975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.107083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.107113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.107128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.107142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.107172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.116976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.117086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.117112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.117127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.117139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.117168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.127018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.127157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.127183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.127197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.127210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.127238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 
00:32:49.430 [2024-10-07 09:53:44.137051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.137161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.137186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.137201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.137214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.137250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.147078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.147189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.147214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.147229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.147242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.147276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.157067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.157177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.157203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.157218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.157231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.157258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 
00:32:49.430 [2024-10-07 09:53:44.167118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.167235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.167260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.167275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.167288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.167317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.177181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.177303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.177329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.177343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.430 [2024-10-07 09:53:44.177356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.430 [2024-10-07 09:53:44.177384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.430 qpair failed and we were unable to recover it. 00:32:49.430 [2024-10-07 09:53:44.187156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.430 [2024-10-07 09:53:44.187271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.430 [2024-10-07 09:53:44.187297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.430 [2024-10-07 09:53:44.187311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.187325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.187353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 
00:32:49.431 [2024-10-07 09:53:44.197212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.431 [2024-10-07 09:53:44.197328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.431 [2024-10-07 09:53:44.197360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.431 [2024-10-07 09:53:44.197376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.197388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.197416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 00:32:49.431 [2024-10-07 09:53:44.207270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.431 [2024-10-07 09:53:44.207411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.431 [2024-10-07 09:53:44.207436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.431 [2024-10-07 09:53:44.207450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.207463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.207491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 00:32:49.431 [2024-10-07 09:53:44.217292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.431 [2024-10-07 09:53:44.217398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.431 [2024-10-07 09:53:44.217424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.431 [2024-10-07 09:53:44.217438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.217451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.217479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 
00:32:49.431 [2024-10-07 09:53:44.227291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.431 [2024-10-07 09:53:44.227397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.431 [2024-10-07 09:53:44.227424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.431 [2024-10-07 09:53:44.227438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.227451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.227480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 00:32:49.431 [2024-10-07 09:53:44.237299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.431 [2024-10-07 09:53:44.237410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.431 [2024-10-07 09:53:44.237436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.431 [2024-10-07 09:53:44.237450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.431 [2024-10-07 09:53:44.237469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.431 [2024-10-07 09:53:44.237498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.431 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.247385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.247518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.247542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.247556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.247568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.247597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 
00:32:49.691 [2024-10-07 09:53:44.257362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.257477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.257502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.257517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.257529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.257557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.267399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.267524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.267550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.267564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.267577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.267605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.277434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.277557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.277583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.277598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.277611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.277639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 
00:32:49.691 [2024-10-07 09:53:44.287484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.287614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.287639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.287653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.287666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.287694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.297481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.297604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.297629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.297644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.297657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.297685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.307485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.307598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.307624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.307638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.307651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.307679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 
00:32:49.691 [2024-10-07 09:53:44.317627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.317733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.317759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.317774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.317787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.317814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.327604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.327720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.327745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.327760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.327778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.327807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.337631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.337763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.337789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.337803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.337816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.337844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 
00:32:49.691 [2024-10-07 09:53:44.347687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.347798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.347824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.347838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.347851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.347880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.357633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.691 [2024-10-07 09:53:44.357743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.691 [2024-10-07 09:53:44.357769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.691 [2024-10-07 09:53:44.357784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.691 [2024-10-07 09:53:44.357797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.691 [2024-10-07 09:53:44.357825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.691 qpair failed and we were unable to recover it. 00:32:49.691 [2024-10-07 09:53:44.367727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.367843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.367868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.367882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.367903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.367933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 
00:32:49.692 [2024-10-07 09:53:44.377691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.377805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.377831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.377845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.377858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.377886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.387731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.387845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.387871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.387885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.387907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.387937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.397786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.397902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.397927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.397942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.397956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.397985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 
00:32:49.692 [2024-10-07 09:53:44.407833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.407961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.407987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.408001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.408014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.408042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.417875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.418001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.418026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.418040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.418058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.418087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.427875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.427997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.428022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.428037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.428050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.428078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 
00:32:49.692 [2024-10-07 09:53:44.437860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.437972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.437999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.438013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.438026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.438055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.447941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.448059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.448085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.448100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.448113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.448141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.457917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.458033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.458059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.458073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.458085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.458113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 
00:32:49.692 [2024-10-07 09:53:44.468036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.468169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.468202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.468216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.468229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.468268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.477978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.478086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.478113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.478127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.478140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.478168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 00:32:49.692 [2024-10-07 09:53:44.488013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.488146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.488171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.488186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.692 [2024-10-07 09:53:44.488198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.692 [2024-10-07 09:53:44.488227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.692 qpair failed and we were unable to recover it. 
00:32:49.692 [2024-10-07 09:53:44.498027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.692 [2024-10-07 09:53:44.498146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.692 [2024-10-07 09:53:44.498172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.692 [2024-10-07 09:53:44.498186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.693 [2024-10-07 09:53:44.498198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.693 [2024-10-07 09:53:44.498226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.693 qpair failed and we were unable to recover it. 00:32:49.954 [2024-10-07 09:53:44.508038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.954 [2024-10-07 09:53:44.508145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.954 [2024-10-07 09:53:44.508170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.954 [2024-10-07 09:53:44.508185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.954 [2024-10-07 09:53:44.508203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.954 [2024-10-07 09:53:44.508233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.954 qpair failed and we were unable to recover it. 00:32:49.954 [2024-10-07 09:53:44.518166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.954 [2024-10-07 09:53:44.518277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.954 [2024-10-07 09:53:44.518302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.954 [2024-10-07 09:53:44.518316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.954 [2024-10-07 09:53:44.518329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.954 [2024-10-07 09:53:44.518356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.954 qpair failed and we were unable to recover it. 
00:32:49.954 [2024-10-07 09:53:44.528117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.954 [2024-10-07 09:53:44.528236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.954 [2024-10-07 09:53:44.528262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.528276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.528288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.528316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.538152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.538274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.538299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.538314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.538325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.538357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.548153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.548266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.548291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.548306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.548319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.548347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 
00:32:49.955 [2024-10-07 09:53:44.558176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.558285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.558311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.558325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.558339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.558367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.568257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.568375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.568401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.568415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.568428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.568455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.578276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.578388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.578414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.578428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.578442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.578469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 
00:32:49.955 [2024-10-07 09:53:44.588397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.588515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.588541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.588555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.588568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.588595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.598295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.598422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.598447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.598468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.598481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.598509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.608361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.608479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.608505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.608519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.608532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.608559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 
00:32:49.955 [2024-10-07 09:53:44.618384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.618510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.618536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.618550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.618563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.618591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.628440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.628564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.628589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.628603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.628616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.628647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.638444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.638548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.638574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.638588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.638601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.638629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 
00:32:49.955 [2024-10-07 09:53:44.648504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.648640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.648665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.648680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.648693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.648720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.955 [2024-10-07 09:53:44.658507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.955 [2024-10-07 09:53:44.658620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.955 [2024-10-07 09:53:44.658646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.955 [2024-10-07 09:53:44.658660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.955 [2024-10-07 09:53:44.658673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.955 [2024-10-07 09:53:44.658702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.955 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.668573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.668707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.668733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.668747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.668760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.668788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 
00:32:49.956 [2024-10-07 09:53:44.678552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.678686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.678712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.678726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.678739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.678767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.688562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.688681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.688706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.688727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.688740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.688769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.698590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.698699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.698724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.698739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.698752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.698780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 
00:32:49.956 [2024-10-07 09:53:44.708680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.708786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.708811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.708825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.708838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.708876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.718664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.718779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.718804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.718818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.718830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.718859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.728739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.728866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.728899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.728915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.728928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.728956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 
00:32:49.956 [2024-10-07 09:53:44.738742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.738854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.738880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.738903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.738916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.738951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.748724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.748834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.748860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.748874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.748887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.748927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 00:32:49.956 [2024-10-07 09:53:44.758774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.956 [2024-10-07 09:53:44.758914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.956 [2024-10-07 09:53:44.758949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.956 [2024-10-07 09:53:44.758964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.956 [2024-10-07 09:53:44.758976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:49.956 [2024-10-07 09:53:44.759005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.956 qpair failed and we were unable to recover it. 
00:32:50.217 [2024-10-07 09:53:44.768796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.768921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.768947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.768960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.768972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.769001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.778879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.779014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.779040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.779061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.779074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.779103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.788871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.788992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.789017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.789032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.789045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.789082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 
00:32:50.217 [2024-10-07 09:53:44.798848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.798962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.798988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.799002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.799015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.799043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.808910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.809031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.809056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.809070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.809084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.809113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.819039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.819182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.819207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.819221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.819234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.819264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 
00:32:50.217 [2024-10-07 09:53:44.828946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.829054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.829079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.829094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.829107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.829135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.838986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.839095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.839120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.839134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.839147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.839176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.849071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.849185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.849211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.849225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.849238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.849266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 
00:32:50.217 [2024-10-07 09:53:44.859119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.859258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.859284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.859298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.859311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.859339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.869083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.869188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.869214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.869235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.869260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.869288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 00:32:50.217 [2024-10-07 09:53:44.879107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.217 [2024-10-07 09:53:44.879242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.217 [2024-10-07 09:53:44.879268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.217 [2024-10-07 09:53:44.879282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.217 [2024-10-07 09:53:44.879296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.217 [2024-10-07 09:53:44.879324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.217 qpair failed and we were unable to recover it. 
00:32:50.217 [2024-10-07 09:53:44.889164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.889320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.889345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.889360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.889373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.889404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.899151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.899266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.899291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.899306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.899319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.899347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.909235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.909352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.909377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.909392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.909404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.909432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 
00:32:50.218 [2024-10-07 09:53:44.919269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.919377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.919403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.919417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.919430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.919458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.929311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.929449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.929474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.929488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.929501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.929529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.939288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.939405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.939430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.939444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.939457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.939485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 
00:32:50.218 [2024-10-07 09:53:44.949427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.949534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.949559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.949573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.949586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.949615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.959364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.959488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.959519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.959534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.959548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.959576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.969439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.969566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.969590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.969604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.969617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.969650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 
00:32:50.218 [2024-10-07 09:53:44.979436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.979551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.979576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.979590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.979603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.979631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.989404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.989519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.989544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.989557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.989570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.989599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:44.999460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:44.999578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:44.999603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:44.999617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:44.999630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:44.999658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 
00:32:50.218 [2024-10-07 09:53:45.009491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:45.009606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:45.009633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:45.009647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.218 [2024-10-07 09:53:45.009660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.218 [2024-10-07 09:53:45.009688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.218 qpair failed and we were unable to recover it. 00:32:50.218 [2024-10-07 09:53:45.019535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.218 [2024-10-07 09:53:45.019653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.218 [2024-10-07 09:53:45.019678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.218 [2024-10-07 09:53:45.019693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.219 [2024-10-07 09:53:45.019706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.219 [2024-10-07 09:53:45.019734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.219 qpair failed and we were unable to recover it. 00:32:50.219 [2024-10-07 09:53:45.029573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.219 [2024-10-07 09:53:45.029682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.219 [2024-10-07 09:53:45.029707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.219 [2024-10-07 09:53:45.029721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.219 [2024-10-07 09:53:45.029734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.219 [2024-10-07 09:53:45.029763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.219 qpair failed and we were unable to recover it. 
00:32:50.478 [2024-10-07 09:53:45.039554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.039665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.039691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.039705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.039718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.039747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.049634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.049755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.049786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.049801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.049814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.049843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.059617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.059732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.059758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.059773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.059786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.059814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 
00:32:50.478 [2024-10-07 09:53:45.069695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.069811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.069836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.069850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.069863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.069902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.079682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.079800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.079825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.079839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.079852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.079881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.089811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.089939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.089965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.089979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.089992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.090027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 
00:32:50.478 [2024-10-07 09:53:45.099776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.099889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.099921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.099935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.099948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.099985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.109769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.109876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.109909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.478 [2024-10-07 09:53:45.109925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.478 [2024-10-07 09:53:45.109938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.478 [2024-10-07 09:53:45.109966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.478 qpair failed and we were unable to recover it. 00:32:50.478 [2024-10-07 09:53:45.119782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.478 [2024-10-07 09:53:45.119886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.478 [2024-10-07 09:53:45.119920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.119935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.119948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.119976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 
00:32:50.479 [2024-10-07 09:53:45.129909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.130036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.130061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.130075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.130088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.130117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.139875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.140020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.140051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.140066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.140079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.140108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.149921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.150054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.150080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.150094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.150107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.150135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 
00:32:50.479 [2024-10-07 09:53:45.159983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.160094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.160119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.160134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.160147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.160175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.169988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.170115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.170140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.170154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.170168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.170196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.179953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.180067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.180092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.180106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.180119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.180154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 
00:32:50.479 [2024-10-07 09:53:45.190073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.190185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.190210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.190225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.190238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.190267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.200035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.200147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.200172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.200187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.200199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.200226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.210134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.210298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.210323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.210337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.210350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.210378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 
00:32:50.479 [2024-10-07 09:53:45.220124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.220268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.220294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.220309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.220321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.220349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.230124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.230237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.230268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.230283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.230296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.230325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.240142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.240253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.240278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.240292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.240306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.240334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 
00:32:50.479 [2024-10-07 09:53:45.250243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.479 [2024-10-07 09:53:45.250368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.479 [2024-10-07 09:53:45.250392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.479 [2024-10-07 09:53:45.250406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.479 [2024-10-07 09:53:45.250417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.479 [2024-10-07 09:53:45.250446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.479 qpair failed and we were unable to recover it. 00:32:50.479 [2024-10-07 09:53:45.260230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.480 [2024-10-07 09:53:45.260348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.480 [2024-10-07 09:53:45.260374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.480 [2024-10-07 09:53:45.260388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.480 [2024-10-07 09:53:45.260401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.480 [2024-10-07 09:53:45.260437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.480 qpair failed and we were unable to recover it. 00:32:50.480 [2024-10-07 09:53:45.270247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.480 [2024-10-07 09:53:45.270355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.480 [2024-10-07 09:53:45.270380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.480 [2024-10-07 09:53:45.270394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.480 [2024-10-07 09:53:45.270407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.480 [2024-10-07 09:53:45.270442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.480 qpair failed and we were unable to recover it. 
00:32:50.480 [2024-10-07 09:53:45.280305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.480 [2024-10-07 09:53:45.280411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.480 [2024-10-07 09:53:45.280437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.480 [2024-10-07 09:53:45.280451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.480 [2024-10-07 09:53:45.280464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.480 [2024-10-07 09:53:45.280492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.480 qpair failed and we were unable to recover it. 00:32:50.480 [2024-10-07 09:53:45.290369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.480 [2024-10-07 09:53:45.290488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.480 [2024-10-07 09:53:45.290513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.480 [2024-10-07 09:53:45.290527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.480 [2024-10-07 09:53:45.290540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.480 [2024-10-07 09:53:45.290568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.480 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.300304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.300421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.300446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.300460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.300473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.300501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 
00:32:50.739 [2024-10-07 09:53:45.310352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.310471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.310496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.310510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.310523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.310551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.320363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.320473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.320503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.320518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.320531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.320559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.330411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.330526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.330551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.330565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.330578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.330606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 
00:32:50.739 [2024-10-07 09:53:45.340433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.340540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.340565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.340579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.340593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.340621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.350477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.350582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.350607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.350622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.350634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.350663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.360494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.360609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.360635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.360650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.360662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.360697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 
00:32:50.739 [2024-10-07 09:53:45.370585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.370712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.370737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.370751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.370764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.739 [2024-10-07 09:53:45.370793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.739 qpair failed and we were unable to recover it. 00:32:50.739 [2024-10-07 09:53:45.380609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.739 [2024-10-07 09:53:45.380723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.739 [2024-10-07 09:53:45.380748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.739 [2024-10-07 09:53:45.380763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.739 [2024-10-07 09:53:45.380775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.380804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.390586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.390697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.390722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.390737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.390750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.390778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 
00:32:50.740 [2024-10-07 09:53:45.400655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.400768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.400793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.400807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.400820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.400848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.410737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.410851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.410881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.410907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.410922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.410951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.420642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.420752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.420777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.420792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.420805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.420833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 
00:32:50.740 [2024-10-07 09:53:45.430731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.430865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.430898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.430916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.430930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.430959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.440753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.440911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.440938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.440953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.440966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.440994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.450793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.450914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.450940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.450954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.450981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.451010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 
00:32:50.740 [2024-10-07 09:53:45.460746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.460865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.460897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.460915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.460928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.460957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.470796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.470953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.470979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.470994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.471013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.471043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.480986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.481121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.481146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.481160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.481173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.481202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 
00:32:50.740 [2024-10-07 09:53:45.490958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.491086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.491111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.491125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.491145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.491174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.501024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.501143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.501169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.501183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.501196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.501225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 00:32:50.740 [2024-10-07 09:53:45.510992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.511101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.740 [2024-10-07 09:53:45.511125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.740 [2024-10-07 09:53:45.511140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.740 [2024-10-07 09:53:45.511153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.740 [2024-10-07 09:53:45.511181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.740 qpair failed and we were unable to recover it. 
00:32:50.740 [2024-10-07 09:53:45.520955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.740 [2024-10-07 09:53:45.521063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.741 [2024-10-07 09:53:45.521089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.741 [2024-10-07 09:53:45.521103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.741 [2024-10-07 09:53:45.521116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.741 [2024-10-07 09:53:45.521145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.741 qpair failed and we were unable to recover it. 00:32:50.741 [2024-10-07 09:53:45.531029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.741 [2024-10-07 09:53:45.531143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.741 [2024-10-07 09:53:45.531168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.741 [2024-10-07 09:53:45.531182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.741 [2024-10-07 09:53:45.531196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.741 [2024-10-07 09:53:45.531224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.741 qpair failed and we were unable to recover it. 00:32:50.741 [2024-10-07 09:53:45.540975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.741 [2024-10-07 09:53:45.541086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.741 [2024-10-07 09:53:45.541112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.741 [2024-10-07 09:53:45.541127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.741 [2024-10-07 09:53:45.541145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.741 [2024-10-07 09:53:45.541175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.741 qpair failed and we were unable to recover it. 
00:32:50.741 [2024-10-07 09:53:45.551017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.741 [2024-10-07 09:53:45.551128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.741 [2024-10-07 09:53:45.551153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.741 [2024-10-07 09:53:45.551168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.741 [2024-10-07 09:53:45.551182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:50.741 [2024-10-07 09:53:45.551210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.741 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.561087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.561206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.561231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.561246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.561258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.561286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.571104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.571234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.571260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.571274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.571286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.571315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 
00:32:51.001 [2024-10-07 09:53:45.581153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.581263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.581289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.581304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.581317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.581345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.591147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.591260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.591285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.591299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.591312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.591340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.601151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.601257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.601282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.601296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.601309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.601337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 
00:32:51.001 [2024-10-07 09:53:45.611227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.611344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.611369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.611384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.611396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.611424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.621214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.621324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.621350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.621365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.621378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.621406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.631252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.631361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.631386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.631400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.631419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.631449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 
00:32:51.001 [2024-10-07 09:53:45.641278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.641397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.641423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.641437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.641450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.641478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.651353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.651475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.651501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.651515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.651527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.651555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.661329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.661438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.661464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.661478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.661492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.661520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 
00:32:51.001 [2024-10-07 09:53:45.671356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.671462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.671487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.671502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.671515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.001 [2024-10-07 09:53:45.671543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.001 qpair failed and we were unable to recover it. 00:32:51.001 [2024-10-07 09:53:45.681389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.001 [2024-10-07 09:53:45.681514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.001 [2024-10-07 09:53:45.681540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.001 [2024-10-07 09:53:45.681555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.001 [2024-10-07 09:53:45.681568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.681596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.691441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.691556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.691581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.691596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.691609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.691638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 
00:32:51.002 [2024-10-07 09:53:45.701437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.701548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.701574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.701588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.701602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.701630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.711504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.711619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.711645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.711659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.711672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.711700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.721493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.721599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.721625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.721639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.721657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.721697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 
00:32:51.002 [2024-10-07 09:53:45.731537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.731654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.731679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.731694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.731707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.731739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.741564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.741673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.741699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.741713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.741727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.741755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.751577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.751687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.751713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.751728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.751740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.751769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 
00:32:51.002 [2024-10-07 09:53:45.761619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.761731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.761757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.761771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.761784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.761812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.771683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.771797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.771822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.771836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.771849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.771878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.781667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.781777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.781802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.781816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.781829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.781857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 
00:32:51.002 [2024-10-07 09:53:45.791727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.791847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.791872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.791886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.791907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.791937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.801808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.801935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.801961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.801975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.801988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.802017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 00:32:51.002 [2024-10-07 09:53:45.811812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.002 [2024-10-07 09:53:45.811936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.002 [2024-10-07 09:53:45.811962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.002 [2024-10-07 09:53:45.811985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.002 [2024-10-07 09:53:45.811999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.002 [2024-10-07 09:53:45.812028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.002 qpair failed and we were unable to recover it. 
00:32:51.262 [2024-10-07 09:53:45.821942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.262 [2024-10-07 09:53:45.822070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.262 [2024-10-07 09:53:45.822096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.262 [2024-10-07 09:53:45.822110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.262 [2024-10-07 09:53:45.822123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.262 [2024-10-07 09:53:45.822152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.262 qpair failed and we were unable to recover it. 00:32:51.262 [2024-10-07 09:53:45.831925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.832040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.832066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.832080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.832093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.832121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.841884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.841999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.842025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.842039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.842052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.842080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 
00:32:51.263 [2024-10-07 09:53:45.851963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.852104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.852129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.852144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.852157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.852185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.861963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.862081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.862106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.862121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.862133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.862161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.871966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.872084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.872109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.872123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.872136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.872164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 
00:32:51.263 [2024-10-07 09:53:45.882015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.882144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.882169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.882183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.882196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.882224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.892041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.892188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.892214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.892229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.892241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.892270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.902081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.902194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.902219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.902240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.902254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.902282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 
00:32:51.263 [2024-10-07 09:53:45.912093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.912201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.912225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.912240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.912253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.912281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.922107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.922221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.922247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.922261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.922273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.922301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.932152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.932308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.932333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.932348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.932360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.932388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 
00:32:51.263 [2024-10-07 09:53:45.942191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.942301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.942327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.942341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.942354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.942388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.952205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.952314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.952338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.263 [2024-10-07 09:53:45.952352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.263 [2024-10-07 09:53:45.952365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.263 [2024-10-07 09:53:45.952393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.263 qpair failed and we were unable to recover it. 00:32:51.263 [2024-10-07 09:53:45.962204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.263 [2024-10-07 09:53:45.962325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.263 [2024-10-07 09:53:45.962351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:45.962365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:45.962377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:45.962406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 
00:32:51.264 [2024-10-07 09:53:45.972278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:45.972392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:45.972417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:45.972431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:45.972444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:45.972472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:45.982303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:45.982429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:45.982455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:45.982469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:45.982482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:45.982512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:45.992292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:45.992403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:45.992429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:45.992450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:45.992464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:45.992493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 
00:32:51.264 [2024-10-07 09:53:46.002309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.002415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.002440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.002454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.002467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.002496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:46.012412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.012534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.012559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.012574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.012597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.012626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:46.022388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.022512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.022538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.022552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.022565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.022594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 
00:32:51.264 [2024-10-07 09:53:46.032428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.032544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.032569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.032583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.032597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.032625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:46.042462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.042586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.042612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.042626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.042639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.042668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:46.052502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.052620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.052646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.052660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.052673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.052701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 
00:32:51.264 [2024-10-07 09:53:46.062533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.062667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.062693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.062707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.062721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.062749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.264 [2024-10-07 09:53:46.072542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.264 [2024-10-07 09:53:46.072659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.264 [2024-10-07 09:53:46.072685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.264 [2024-10-07 09:53:46.072699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.264 [2024-10-07 09:53:46.072712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.264 [2024-10-07 09:53:46.072740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.264 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.082536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.082642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.082667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.082687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.082701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.082729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 
00:32:51.524 [2024-10-07 09:53:46.092633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.092747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.092773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.092787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.092800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.092830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.102594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.102700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.102726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.102740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.102753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.102782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.112641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.112766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.112791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.112805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.112818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.112846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 
00:32:51.524 [2024-10-07 09:53:46.122674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.122779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.122805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.122819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.122832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.122860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.132749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.132870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.132903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.132920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.132933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.132962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.142737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.142851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.142876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.142897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.142912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.142941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 
00:32:51.524 [2024-10-07 09:53:46.152835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.152971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.152996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.153011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.153023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.153054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.162765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.162910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.162936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.162950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.162963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.162991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.524 qpair failed and we were unable to recover it. 00:32:51.524 [2024-10-07 09:53:46.172831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.524 [2024-10-07 09:53:46.172984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.524 [2024-10-07 09:53:46.173015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.524 [2024-10-07 09:53:46.173031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.524 [2024-10-07 09:53:46.173043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.524 [2024-10-07 09:53:46.173072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 
00:32:51.525 [2024-10-07 09:53:46.182862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.182997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.183022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.183037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.183050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.183078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.192909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.193027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.193052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.193067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.193080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.193109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.202925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.203033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.203058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.203073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.203085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.203114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 
00:32:51.525 [2024-10-07 09:53:46.213025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.213140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.213165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.213178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.213191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.213220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.223048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.223171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.223197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.223211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.223224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.223253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.233060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.233176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.233202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.233216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.233229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.233258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 
00:32:51.525 [2024-10-07 09:53:46.243020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.243153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.243179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.243193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.243205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.243233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.253081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.253218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.253241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.253255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.253268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.253296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.263086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.263180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.263210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.263225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.263237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.263266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 
00:32:51.525 [2024-10-07 09:53:46.273111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.273236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.273261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.273275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.273289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.273317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.283117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.283228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.283253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.283268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.283281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.283309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.293181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.293299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.293324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.293338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.293351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.293379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 
00:32:51.525 [2024-10-07 09:53:46.303250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.303360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.525 [2024-10-07 09:53:46.303387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.525 [2024-10-07 09:53:46.303401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.525 [2024-10-07 09:53:46.303414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.525 [2024-10-07 09:53:46.303448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.525 qpair failed and we were unable to recover it. 00:32:51.525 [2024-10-07 09:53:46.313226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.525 [2024-10-07 09:53:46.313339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.526 [2024-10-07 09:53:46.313364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.526 [2024-10-07 09:53:46.313378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.526 [2024-10-07 09:53:46.313391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.526 [2024-10-07 09:53:46.313420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.526 qpair failed and we were unable to recover it. 00:32:51.526 [2024-10-07 09:53:46.323284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.526 [2024-10-07 09:53:46.323400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.526 [2024-10-07 09:53:46.323426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.526 [2024-10-07 09:53:46.323440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.526 [2024-10-07 09:53:46.323453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.526 [2024-10-07 09:53:46.323481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.526 qpair failed and we were unable to recover it. 
00:32:51.526 [2024-10-07 09:53:46.333288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.526 [2024-10-07 09:53:46.333409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.526 [2024-10-07 09:53:46.333434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.526 [2024-10-07 09:53:46.333449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.526 [2024-10-07 09:53:46.333461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.526 [2024-10-07 09:53:46.333490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.526 qpair failed and we were unable to recover it. 00:32:51.785 [2024-10-07 09:53:46.343306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.785 [2024-10-07 09:53:46.343434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.785 [2024-10-07 09:53:46.343459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.785 [2024-10-07 09:53:46.343473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.785 [2024-10-07 09:53:46.343485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.785 [2024-10-07 09:53:46.343514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.785 qpair failed and we were unable to recover it. 00:32:51.785 [2024-10-07 09:53:46.353324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.785 [2024-10-07 09:53:46.353436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.785 [2024-10-07 09:53:46.353467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.785 [2024-10-07 09:53:46.353482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.785 [2024-10-07 09:53:46.353494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.785 [2024-10-07 09:53:46.353525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.785 qpair failed and we were unable to recover it. 
00:32:51.785 [2024-10-07 09:53:46.363354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.785 [2024-10-07 09:53:46.363466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.785 [2024-10-07 09:53:46.363492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.785 [2024-10-07 09:53:46.363506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.785 [2024-10-07 09:53:46.363520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.785 [2024-10-07 09:53:46.363548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.785 qpair failed and we were unable to recover it. 00:32:51.785 [2024-10-07 09:53:46.373408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.785 [2024-10-07 09:53:46.373530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.785 [2024-10-07 09:53:46.373556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.785 [2024-10-07 09:53:46.373570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.785 [2024-10-07 09:53:46.373583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.785 [2024-10-07 09:53:46.373611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.785 qpair failed and we were unable to recover it. 00:32:51.785 [2024-10-07 09:53:46.383426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.785 [2024-10-07 09:53:46.383539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.785 [2024-10-07 09:53:46.383565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.785 [2024-10-07 09:53:46.383579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.785 [2024-10-07 09:53:46.383592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.785 [2024-10-07 09:53:46.383621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 
00:32:51.786 [2024-10-07 09:53:46.393457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.393579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.393604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.393619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.393632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.393665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.403501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.403612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.403638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.403651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.403665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.403693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.413591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.413727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.413752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.413765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.413778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.413806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 
00:32:51.786 [2024-10-07 09:53:46.423611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.423753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.423779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.423794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.423806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.423834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.433579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.433702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.433727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.433742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.433754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.433783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.443589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.443698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.443729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.443745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.443758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.443786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 
00:32:51.786 [2024-10-07 09:53:46.453635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.453756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.453781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.453795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.453808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.453836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.463670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.463780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.463806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.463820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.463833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.463863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.473663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.473769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.473795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.473826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.473841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.473886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 
00:32:51.786 [2024-10-07 09:53:46.483722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.483842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.483867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.483881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.483902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.483941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.493800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.493932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.493958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.493972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.493985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.494013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.503862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.503981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.504007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.504021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.504033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.504060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 
00:32:51.786 [2024-10-07 09:53:46.513829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.513945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.513972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.786 [2024-10-07 09:53:46.513986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.786 [2024-10-07 09:53:46.513999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.786 [2024-10-07 09:53:46.514028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.786 qpair failed and we were unable to recover it. 00:32:51.786 [2024-10-07 09:53:46.523913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.786 [2024-10-07 09:53:46.524031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.786 [2024-10-07 09:53:46.524056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.524070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.524083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.524111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 00:32:51.787 [2024-10-07 09:53:46.533918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.534062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.534092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.534107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.534120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.534149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 
00:32:51.787 [2024-10-07 09:53:46.543927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.544045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.544070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.544085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.544098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.544126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 00:32:51.787 [2024-10-07 09:53:46.553878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.553999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.554024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.554038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.554051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.554080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 00:32:51.787 [2024-10-07 09:53:46.563961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.564068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.564094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.564108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.564121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.564149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 
00:32:51.787 [2024-10-07 09:53:46.574027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.574141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.574166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.574180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.574193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.574236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 00:32:51.787 [2024-10-07 09:53:46.584021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.584158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.584182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.584202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.584215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.584243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 00:32:51.787 [2024-10-07 09:53:46.594045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.787 [2024-10-07 09:53:46.594154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.787 [2024-10-07 09:53:46.594180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.787 [2024-10-07 09:53:46.594193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.787 [2024-10-07 09:53:46.594206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:51.787 [2024-10-07 09:53:46.594241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.787 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-10-07 09:53:46.604040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.604147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.604173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.604187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.604200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.604229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.614103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.614221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.614247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.614262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.614275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.614304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.624119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.624228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.624258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.624273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.624287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.624316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-10-07 09:53:46.634130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.634267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.634292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.634306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.634319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.634347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.644151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.644284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.644310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.644325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.644337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.644365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.654272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.654386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.654410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.654425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.654437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.654465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-10-07 09:53:46.664207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.664316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.664342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.664356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.664374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.664403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.674255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.674368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.674393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.674407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.674420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.674449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.684314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.684452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.684477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.684492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.684504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.684532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-10-07 09:53:46.694295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.694417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.694442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.694456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.694470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.694498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.704351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.704487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.704512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.704526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.704539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.704568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.714376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.714492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.714518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.714532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.714545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.714582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-10-07 09:53:46.724368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.047 [2024-10-07 09:53:46.724476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.047 [2024-10-07 09:53:46.724502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.047 [2024-10-07 09:53:46.724517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.047 [2024-10-07 09:53:46.724530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.047 [2024-10-07 09:53:46.724559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-10-07 09:53:46.734420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.734544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.734569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.734583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.734596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.734625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.744435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.744543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.744569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.744583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.744596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.744624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 
00:32:52.048 [2024-10-07 09:53:46.754472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.754580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.754605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.754620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.754638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.754666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.764517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.764630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.764656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.764671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.764684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.764711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.774544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.774672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.774697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.774711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.774724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.774752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 
00:32:52.048 [2024-10-07 09:53:46.784567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.784681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.784706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.784721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.784733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.784762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.794601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.794721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.794747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.794761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.794774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.794805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.804579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.804669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.804694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.804708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.804721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.804749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 
00:32:52.048 [2024-10-07 09:53:46.814673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.814789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.814814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.814828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.814841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.814869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.824690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.824817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.824843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.824858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.824871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.824907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.834745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.834857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.834881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.834905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.834919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.834957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 
00:32:52.048 [2024-10-07 09:53:46.844796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.844928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.844953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.844967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.844986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.845015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-10-07 09:53:46.854794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.048 [2024-10-07 09:53:46.854927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.048 [2024-10-07 09:53:46.854952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.048 [2024-10-07 09:53:46.854966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.048 [2024-10-07 09:53:46.854979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.048 [2024-10-07 09:53:46.855010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.308 [2024-10-07 09:53:46.864784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.864900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.864926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.864941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.864955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.864986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 
00:32:52.308 [2024-10-07 09:53:46.874808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.874931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.874957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.874971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.874985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.875013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 00:32:52.308 [2024-10-07 09:53:46.884859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.884976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.885002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.885016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.885029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.885056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 00:32:52.308 [2024-10-07 09:53:46.894924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.895094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.895119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.895134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.895147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.895175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 
00:32:52.308 [2024-10-07 09:53:46.904916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.905050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.905076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.905090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.905103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.905131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 00:32:52.308 [2024-10-07 09:53:46.915026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.915141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.915166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.915180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.915193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.915222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 00:32:52.308 [2024-10-07 09:53:46.924949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.925060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.925085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.308 [2024-10-07 09:53:46.925099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.308 [2024-10-07 09:53:46.925112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.308 [2024-10-07 09:53:46.925141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.308 qpair failed and we were unable to recover it. 
00:32:52.308 [2024-10-07 09:53:46.935041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.308 [2024-10-07 09:53:46.935179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.308 [2024-10-07 09:53:46.935205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.935229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.935248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.935280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:46.945151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.945272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.945297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.945312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.945325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.945353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:46.955077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.955202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.955227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.955241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.955255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.955285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 
00:32:52.309 [2024-10-07 09:53:46.965059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.965179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.965204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.965218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.965231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.965259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:46.975196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.975329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.975353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.975368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.975381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.975410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:46.985145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.985260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.985285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.985299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.985312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.985340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 
00:32:52.309 [2024-10-07 09:53:46.995234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:46.995387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:46.995413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:46.995428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:46.995441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:46.995470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:47.005160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.005282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.005308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.005322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:47.005335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:47.005363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:47.015313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.015464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.015489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.015503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:47.015517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:47.015546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 
00:32:52.309 [2024-10-07 09:53:47.025220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.025334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.025360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.025381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:47.025394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:47.025422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:47.035260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.035412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.035437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.035451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:47.035464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:47.035492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 00:32:52.309 [2024-10-07 09:53:47.045273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.045385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.045411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.045425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.309 [2024-10-07 09:53:47.045438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.309 [2024-10-07 09:53:47.045467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.309 qpair failed and we were unable to recover it. 
00:32:52.309 [2024-10-07 09:53:47.055324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.309 [2024-10-07 09:53:47.055441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.309 [2024-10-07 09:53:47.055467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.309 [2024-10-07 09:53:47.055481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.055493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.055522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 00:32:52.310 [2024-10-07 09:53:47.065383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.065497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.065522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.065536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.065549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.065578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 00:32:52.310 [2024-10-07 09:53:47.075383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.075497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.075523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.075537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.075550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.075580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 
00:32:52.310 [2024-10-07 09:53:47.085379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.085486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.085512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.085526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.085539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.085567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 00:32:52.310 [2024-10-07 09:53:47.095432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.095547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.095573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.095586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.095599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.095628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 00:32:52.310 [2024-10-07 09:53:47.105493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.105600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.105625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.105640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.105653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.105681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 
00:32:52.310 [2024-10-07 09:53:47.115477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.310 [2024-10-07 09:53:47.115582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.310 [2024-10-07 09:53:47.115607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.310 [2024-10-07 09:53:47.115627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.310 [2024-10-07 09:53:47.115641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.310 [2024-10-07 09:53:47.115671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.310 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.125542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.125683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.125709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.125724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.125736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.125764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.135603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.135752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.135777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.135792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.135804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.135833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 
00:32:52.571 [2024-10-07 09:53:47.145577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.145689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.145715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.145729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.145742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.145770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.155638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.155760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.155786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.155800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.155813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.155841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.165625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.165758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.165783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.165798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.165810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.165839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 
00:32:52.571 [2024-10-07 09:53:47.175716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.175830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.175855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.175869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.175882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.175929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.185739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.185858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.185883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.185909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.185923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.185951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 00:32:52.571 [2024-10-07 09:53:47.195736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.571 [2024-10-07 09:53:47.195851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.571 [2024-10-07 09:53:47.195877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.571 [2024-10-07 09:53:47.195899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.571 [2024-10-07 09:53:47.195913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.571 [2024-10-07 09:53:47.195942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.571 qpair failed and we were unable to recover it. 
00:32:52.571 [2024-10-07 09:53:47.205825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.205938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.205964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.205983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.205996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.206024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.215792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.215913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.215938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.215953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.215966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.215995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.225872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.225995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.226021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.226036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.226048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.226077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 
00:32:52.572 [2024-10-07 09:53:47.235877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.235999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.236025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.236039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.236052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.236091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.245883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.246011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.246036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.246051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.246064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.246092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.255980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.256101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.256125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.256139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.256150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.256178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 
00:32:52.572 [2024-10-07 09:53:47.266001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.266094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.266120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.266135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.266147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.266175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.275949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.276063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.276088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.276102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.276115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.276145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.285984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.286097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.286123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.286137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.286150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.286178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 
00:32:52.572 [2024-10-07 09:53:47.296007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.296118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.296143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.296166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.296180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.296209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.306042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.306152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.306177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.306191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.306204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.306232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.316114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.316220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.316244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.316259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.316272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.316300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 
00:32:52.572 [2024-10-07 09:53:47.326099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.326208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.326234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.326248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.572 [2024-10-07 09:53:47.326261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.572 [2024-10-07 09:53:47.326289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.572 qpair failed and we were unable to recover it. 00:32:52.572 [2024-10-07 09:53:47.336181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.572 [2024-10-07 09:53:47.336308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.572 [2024-10-07 09:53:47.336333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.572 [2024-10-07 09:53:47.336348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.573 [2024-10-07 09:53:47.336361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.573 [2024-10-07 09:53:47.336389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.573 qpair failed and we were unable to recover it. 00:32:52.573 [2024-10-07 09:53:47.346150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.573 [2024-10-07 09:53:47.346272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.573 [2024-10-07 09:53:47.346298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.573 [2024-10-07 09:53:47.346313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.573 [2024-10-07 09:53:47.346326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.573 [2024-10-07 09:53:47.346354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.573 qpair failed and we were unable to recover it. 
00:32:52.573 [2024-10-07 09:53:47.356183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.573 [2024-10-07 09:53:47.356287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.573 [2024-10-07 09:53:47.356312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.573 [2024-10-07 09:53:47.356327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.573 [2024-10-07 09:53:47.356340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.573 [2024-10-07 09:53:47.356368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.573 qpair failed and we were unable to recover it. 00:32:52.573 [2024-10-07 09:53:47.366238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.573 [2024-10-07 09:53:47.366350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.573 [2024-10-07 09:53:47.366375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.573 [2024-10-07 09:53:47.366389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.573 [2024-10-07 09:53:47.366402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.573 [2024-10-07 09:53:47.366431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.573 qpair failed and we were unable to recover it. 00:32:52.573 [2024-10-07 09:53:47.376247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.573 [2024-10-07 09:53:47.376362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.573 [2024-10-07 09:53:47.376387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.573 [2024-10-07 09:53:47.376401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.573 [2024-10-07 09:53:47.376414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.573 [2024-10-07 09:53:47.376443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.573 qpair failed and we were unable to recover it. 
00:32:52.833 [2024-10-07 09:53:47.386301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.386414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.386445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.386460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.386473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.386501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.396294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.396402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.396428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.396442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.396455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.396483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.406361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.406469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.406494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.406508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.406521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.406549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 
00:32:52.833 [2024-10-07 09:53:47.416404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.416548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.416574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.416588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.416601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.416630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.426416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.426534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.426559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.426573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.426586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.426614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.436453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.436577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.436602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.436616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.436629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.436658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 
00:32:52.833 [2024-10-07 09:53:47.446419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.446523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.446548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.446563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.446576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.446604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.456528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.456673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.456698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.456712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.456725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.456754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.466520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.466632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.466657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.466672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.466685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.466713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 
00:32:52.833 [2024-10-07 09:53:47.476557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.833 [2024-10-07 09:53:47.476670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.833 [2024-10-07 09:53:47.476700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.833 [2024-10-07 09:53:47.476715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.833 [2024-10-07 09:53:47.476729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.833 [2024-10-07 09:53:47.476761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.833 qpair failed and we were unable to recover it. 00:32:52.833 [2024-10-07 09:53:47.486540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.486649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.486674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.486689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.486702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.486730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.496580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.496698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.496724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.496738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.496750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.496779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 
00:32:52.834 [2024-10-07 09:53:47.506681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.506788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.506813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.506827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.506840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.506868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.516632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.516740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.516765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.516779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.516793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.516829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.526644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.526768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.526794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.526809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.526821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.526850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 
00:32:52.834 [2024-10-07 09:53:47.536708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.536829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.536854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.536869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.536882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.536919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.546764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.546901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.546927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.546942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.546955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.546983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.556736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.556851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.556876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.556898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.556913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.556942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 
00:32:52.834 [2024-10-07 09:53:47.566769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.566876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.566923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.566941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.566953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.566983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.576847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.576994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.577019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.577033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.577046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.577076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.586830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.586955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.586981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.586995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.587008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.587037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 
00:32:52.834 [2024-10-07 09:53:47.596835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.596950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.596975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.596990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.597002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.597030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.606913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.607021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.607048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.607062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.607075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.834 [2024-10-07 09:53:47.607110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.834 qpair failed and we were unable to recover it. 00:32:52.834 [2024-10-07 09:53:47.616970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.834 [2024-10-07 09:53:47.617109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.834 [2024-10-07 09:53:47.617134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.834 [2024-10-07 09:53:47.617149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.834 [2024-10-07 09:53:47.617162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.835 [2024-10-07 09:53:47.617192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.835 qpair failed and we were unable to recover it. 
00:32:52.835 [2024-10-07 09:53:47.626983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.835 [2024-10-07 09:53:47.627103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.835 [2024-10-07 09:53:47.627129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.835 [2024-10-07 09:53:47.627143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.835 [2024-10-07 09:53:47.627156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.835 [2024-10-07 09:53:47.627186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.835 qpair failed and we were unable to recover it. 00:32:52.835 [2024-10-07 09:53:47.637009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.835 [2024-10-07 09:53:47.637115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.835 [2024-10-07 09:53:47.637139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.835 [2024-10-07 09:53:47.637154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.835 [2024-10-07 09:53:47.637167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.835 [2024-10-07 09:53:47.637194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.835 qpair failed and we were unable to recover it. 00:32:52.835 [2024-10-07 09:53:47.646994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.835 [2024-10-07 09:53:47.647116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.835 [2024-10-07 09:53:47.647141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.835 [2024-10-07 09:53:47.647156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.835 [2024-10-07 09:53:47.647170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:52.835 [2024-10-07 09:53:47.647198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.835 qpair failed and we were unable to recover it. 
00:32:53.093 [2024-10-07 09:53:47.657078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.093 [2024-10-07 09:53:47.657195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.093 [2024-10-07 09:53:47.657226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.093 [2024-10-07 09:53:47.657241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.093 [2024-10-07 09:53:47.657254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.093 [2024-10-07 09:53:47.657283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.093 qpair failed and we were unable to recover it. 00:32:53.093 [2024-10-07 09:53:47.667146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.093 [2024-10-07 09:53:47.667248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.093 [2024-10-07 09:53:47.667275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.093 [2024-10-07 09:53:47.667290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.093 [2024-10-07 09:53:47.667303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.093 [2024-10-07 09:53:47.667332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.093 qpair failed and we were unable to recover it. 00:32:53.093 [2024-10-07 09:53:47.677114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.093 [2024-10-07 09:53:47.677227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.093 [2024-10-07 09:53:47.677252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.093 [2024-10-07 09:53:47.677267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.093 [2024-10-07 09:53:47.677280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.093 [2024-10-07 09:53:47.677309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.093 qpair failed and we were unable to recover it. 
00:32:53.093 [2024-10-07 09:53:47.687097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.093 [2024-10-07 09:53:47.687207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.093 [2024-10-07 09:53:47.687233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.093 [2024-10-07 09:53:47.687247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.093 [2024-10-07 09:53:47.687260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.093 [2024-10-07 09:53:47.687289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.093 qpair failed and we were unable to recover it. 00:32:53.093 [2024-10-07 09:53:47.697244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.093 [2024-10-07 09:53:47.697366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.093 [2024-10-07 09:53:47.697391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.093 [2024-10-07 09:53:47.697405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.093 [2024-10-07 09:53:47.697418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.093 [2024-10-07 09:53:47.697453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.093 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.707184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.707320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.707345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.707359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.707372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.707400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.717184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.717288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.717313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.717328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.717341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.717369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.727244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.727359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.727385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.727400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.727412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.727440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.737364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.737479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.737504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.737519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.737531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.737560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.747316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.747450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.747480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.747496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.747509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.747537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.757300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.757408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.757435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.757449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.757463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.757491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.767321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.767433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.767459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.767474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.767488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.767517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.777376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.777503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.777528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.777543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.777556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.777585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.787385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.787495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.787521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.787535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.787549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.787583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.797473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.797585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.797610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.797625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.797638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.797666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.807526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.807665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.807691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.807705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.807718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.807745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.817564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.817679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.817704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.817718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.817731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.817762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.827505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.827611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.827638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.827652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.827665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.827693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.837579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.837688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.837719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.837734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.837748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.837776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.847575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.847685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.847711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.847725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.847738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.847768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.857619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.857738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.857764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.857778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.857791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.857820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.867638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.867745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.867771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.867786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.867799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.867827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.877702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.877811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.877837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.877851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.877870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.877909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.887722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.887834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.887860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.887875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.887887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.887925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 
00:32:53.094 [2024-10-07 09:53:47.897784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.897913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.897938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.897953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.897966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.897996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.094 [2024-10-07 09:53:47.907747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.094 [2024-10-07 09:53:47.907866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.094 [2024-10-07 09:53:47.907899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.094 [2024-10-07 09:53:47.907916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.094 [2024-10-07 09:53:47.907930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.094 [2024-10-07 09:53:47.907958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.094 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.917808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.917952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.917978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.917992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.918005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.918034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 
00:32:53.353 [2024-10-07 09:53:47.927798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.927923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.927949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.927963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.927976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.928005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.937905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.938039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.938065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.938079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.938092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.938120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.947884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.948012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.948037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.948051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.948064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.948092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 
00:32:53.353 [2024-10-07 09:53:47.957937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.958050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.958075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.958089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.958102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.958131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.967934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.968045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.968071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.968086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.968105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.968135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.977987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.978106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.978132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.978147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.978160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.978188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 
00:32:53.353 [2024-10-07 09:53:47.988018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.988135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.988160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.988175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.988188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.988215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:47.998017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:47.998129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:47.998154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.353 [2024-10-07 09:53:47.998168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.353 [2024-10-07 09:53:47.998182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.353 [2024-10-07 09:53:47.998210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.353 qpair failed and we were unable to recover it. 00:32:53.353 [2024-10-07 09:53:48.008029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.353 [2024-10-07 09:53:48.008147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.353 [2024-10-07 09:53:48.008173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.008187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.008199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.008229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 
00:32:53.354 [2024-10-07 09:53:48.018076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.018200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.018225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.018239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.018252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.018280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.028143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.028263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.028288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.028302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.028315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.028343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.038150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.038258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.038282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.038297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.038310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.038337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 
00:32:53.354 [2024-10-07 09:53:48.048209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.048315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.048340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.048355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.048368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.048396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.058210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.058325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.058351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.058365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.058386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.058415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.068262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.068370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.068396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.068410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.068424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.068452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 
00:32:53.354 [2024-10-07 09:53:48.078260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.078362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.078387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.078402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.078416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.078444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.088291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.088423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.088448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.088463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.088475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.088504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.098433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.098559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.098584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.098599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.098611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.098641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 
00:32:53.354 [2024-10-07 09:53:48.108388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.108502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.108527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.108542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.108554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.108582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.118426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.118545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.118571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.118585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.118598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.118627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.354 [2024-10-07 09:53:48.128428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.128540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.128566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.128580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.128593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.128623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 
00:32:53.354 [2024-10-07 09:53:48.138469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.354 [2024-10-07 09:53:48.138591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.354 [2024-10-07 09:53:48.138617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.354 [2024-10-07 09:53:48.138631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.354 [2024-10-07 09:53:48.138644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.354 [2024-10-07 09:53:48.138672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.354 qpair failed and we were unable to recover it. 00:32:53.355 [2024-10-07 09:53:48.148483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.355 [2024-10-07 09:53:48.148591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.355 [2024-10-07 09:53:48.148616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.355 [2024-10-07 09:53:48.148630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.355 [2024-10-07 09:53:48.148648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.355 [2024-10-07 09:53:48.148677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.355 qpair failed and we were unable to recover it. 00:32:53.355 [2024-10-07 09:53:48.158480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.355 [2024-10-07 09:53:48.158591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.355 [2024-10-07 09:53:48.158616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.355 [2024-10-07 09:53:48.158631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.355 [2024-10-07 09:53:48.158644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.355 [2024-10-07 09:53:48.158672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.355 qpair failed and we were unable to recover it. 
00:32:53.614 [2024-10-07 09:53:48.168545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.168682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.168708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.168723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.168736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.168765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.178606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.178749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.178775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.178790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.178803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.178831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.188608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.188717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.188743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.188758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.188770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.188799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 
00:32:53.614 [2024-10-07 09:53:48.198676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.198788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.198814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.198828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.198841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.198869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.208595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.208703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.208729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.208743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.208754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.208782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.218737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.218859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.218885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.218911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.218925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.218954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 
00:32:53.614 [2024-10-07 09:53:48.228684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.228795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.228820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.228834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.228847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.228876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.238714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.238820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.238845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.238866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.238880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.238914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.248752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.248863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.248896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.248913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.248926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.248954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 
00:32:53.614 [2024-10-07 09:53:48.258824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.258970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.258994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.614 [2024-10-07 09:53:48.259009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.614 [2024-10-07 09:53:48.259020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.614 [2024-10-07 09:53:48.259048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.614 qpair failed and we were unable to recover it. 00:32:53.614 [2024-10-07 09:53:48.268818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.614 [2024-10-07 09:53:48.268939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.614 [2024-10-07 09:53:48.268964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.268978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.268991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.269020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.278871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.279004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.279030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.279045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.279058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.279086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 
00:32:53.615 [2024-10-07 09:53:48.288868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.288986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.289012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.289027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.289039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.289068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.298936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.299054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.299079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.299094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.299107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.299136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.308942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.309056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.309081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.309095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.309108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.309137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 
00:32:53.615 [2024-10-07 09:53:48.318987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.319103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.319128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.319143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.319155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.319192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.329030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.329144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.329170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.329191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.329205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.329234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.339033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.339158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.339183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.339197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.339210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.339238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 
00:32:53.615 [2024-10-07 09:53:48.349089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.349207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.349231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.349245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.349258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.349287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.359109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.359244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.359269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.359283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.359296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.359325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.369140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.369249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.369274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.369288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.369301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.369332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 
00:32:53.615 [2024-10-07 09:53:48.379190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.379313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.379338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.379351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.379365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.379393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.389245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.389354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.389379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.389393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.389406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.389435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 00:32:53.615 [2024-10-07 09:53:48.399216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.615 [2024-10-07 09:53:48.399325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.615 [2024-10-07 09:53:48.399351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.615 [2024-10-07 09:53:48.399365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.615 [2024-10-07 09:53:48.399379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.615 [2024-10-07 09:53:48.399408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.615 qpair failed and we were unable to recover it. 
00:32:53.615 [2024-10-07 09:53:48.409268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.616 [2024-10-07 09:53:48.409388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.616 [2024-10-07 09:53:48.409413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.616 [2024-10-07 09:53:48.409427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.616 [2024-10-07 09:53:48.409440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.616 [2024-10-07 09:53:48.409469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.616 qpair failed and we were unable to recover it. 00:32:53.616 [2024-10-07 09:53:48.419281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.616 [2024-10-07 09:53:48.419402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.616 [2024-10-07 09:53:48.419428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.616 [2024-10-07 09:53:48.419448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.616 [2024-10-07 09:53:48.419463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.616 [2024-10-07 09:53:48.419492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.616 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.429333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.429444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.429469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.429484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.429497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.429537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 
00:32:53.876 [2024-10-07 09:53:48.439384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.439492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.439517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.439532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.439545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.439573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.449325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.449435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.449460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.449475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.449488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.449516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.459393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.459514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.459540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.459555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.459568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.459596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 
00:32:53.876 [2024-10-07 09:53:48.469391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.469517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.469543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.469558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.469571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.469600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.479428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.479536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.479561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.479575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.479587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.479616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.489435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.489541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.489566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.489580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.489593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.489622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 
00:32:53.876 [2024-10-07 09:53:48.499494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.499628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.499653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.499667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.499680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.876 [2024-10-07 09:53:48.499708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.876 qpair failed and we were unable to recover it. 00:32:53.876 [2024-10-07 09:53:48.509560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.876 [2024-10-07 09:53:48.509675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.876 [2024-10-07 09:53:48.509701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.876 [2024-10-07 09:53:48.509722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.876 [2024-10-07 09:53:48.509735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.877 [2024-10-07 09:53:48.509764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.519523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.519636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.519661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.519676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.519689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.877 [2024-10-07 09:53:48.519716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.877 qpair failed and we were unable to recover it. 
00:32:53.877 [2024-10-07 09:53:48.529637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.529746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.529772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.529786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.529799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120f630 00:32:53.877 [2024-10-07 09:53:48.529827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.539664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.539836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.539876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.539904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.539921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e30000b90 00:32:53.877 [2024-10-07 09:53:48.539953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.549626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.549714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.549741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.549757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.549770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e30000b90 00:32:53.877 [2024-10-07 09:53:48.549801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.877 qpair failed and we were unable to recover it. 
00:32:53.877 [2024-10-07 09:53:48.559647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.559742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.559776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.559795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.559809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e2c000b90 00:32:53.877 [2024-10-07 09:53:48.559842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.569666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.569751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.569779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.569794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.569808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e2c000b90 00:32:53.877 [2024-10-07 09:53:48.569838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.579744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.579898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.579950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.579969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.579984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e38000b90 00:32:53.877 [2024-10-07 09:53:48.580018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:53.877 qpair failed and we were unable to recover it. 
00:32:53.877 [2024-10-07 09:53:48.589723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.877 [2024-10-07 09:53:48.589829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.877 [2024-10-07 09:53:48.589859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.877 [2024-10-07 09:53:48.589876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.877 [2024-10-07 09:53:48.589898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e38000b90 00:32:53.877 [2024-10-07 09:53:48.589950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-10-07 09:53:48.590044] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:53.877 A controller has encountered a failure and is being reset. 00:32:54.136 Controller properly reset. 00:32:54.136 Initializing NVMe Controllers 00:32:54.136 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:54.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:54.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:54.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:54.136 Initialization complete. Launching workers. 
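The records above repeat the same five-message failure signature for every rejected I/O queue CONNECT attempt, with only the timestamp, qpair id and tqpair pointer changing, until the keep-alive failure triggers the controller reset seen here. As an illustration only, not part of the test output: a minimal shell sketch for summarizing a saved copy of this console log, where console.log is a placeholder file name.

# console.log is a placeholder for a saved copy of this console output
# Count how many CONNECT attempts the target rejected
grep -c 'qpair failed and we were unable to recover it' console.log
# List the distinct TCP qpairs and qpair ids named in the failures
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' console.log | sort | uniq -c
grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c

Against the output above, this would show the bulk of the rejections on tqpair 0x120f630 (qpair id 3), with a few final ones on the 0x7f9e* qpairs (ids 1, 2 and 4) just before the keep-alive failure.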
00:32:54.136 Starting thread on core 1 00:32:54.136 Starting thread on core 2 00:32:54.136 Starting thread on core 3 00:32:54.136 Starting thread on core 0 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:54.136 00:32:54.136 real 0m11.356s 00:32:54.136 user 0m20.286s 00:32:54.136 sys 0m5.334s 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:54.136 ************************************ 00:32:54.136 END TEST nvmf_target_disconnect_tc2 00:32:54.136 ************************************ 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.136 rmmod nvme_tcp 00:32:54.136 rmmod nvme_fabrics 00:32:54.136 rmmod nvme_keyring 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1667544 ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1667544 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1667544 ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1667544 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667544 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667544' 00:32:54.136 killing process with pid 1667544 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1667544 00:32:54.136 09:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1667544 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.704 09:53:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.608 09:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.608 00:32:56.608 real 0m16.990s 00:32:56.608 user 0m47.655s 00:32:56.608 sys 0m7.919s 00:32:56.608 09:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.608 09:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:56.608 ************************************ 00:32:56.608 END TEST nvmf_target_disconnect 00:32:56.608 ************************************ 00:32:56.608 09:53:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:56.608 00:32:56.608 real 6m1.902s 00:32:56.608 user 13m15.261s 00:32:56.608 sys 1m31.278s 00:32:56.608 09:53:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.609 09:53:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.609 ************************************ 00:32:56.609 END TEST nvmf_host 00:32:56.609 ************************************ 00:32:56.609 09:53:51 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:56.609 09:53:51 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:56.609 09:53:51 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:56.609 09:53:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:56.609 09:53:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.609 09:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:56.609 ************************************ 00:32:56.609 START TEST nvmf_target_core_interrupt_mode 00:32:56.609 ************************************ 00:32:56.609 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:56.867 * Looking for test storage... 00:32:56.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.867 --rc genhtml_branch_coverage=1 00:32:56.867 --rc genhtml_function_coverage=1 00:32:56.867 --rc genhtml_legend=1 00:32:56.867 --rc geninfo_all_blocks=1 00:32:56.867 --rc geninfo_unexecuted_blocks=1 00:32:56.867 00:32:56.867 ' 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.867 --rc genhtml_branch_coverage=1 00:32:56.867 --rc genhtml_function_coverage=1 00:32:56.867 --rc genhtml_legend=1 00:32:56.867 --rc geninfo_all_blocks=1 00:32:56.867 --rc geninfo_unexecuted_blocks=1 00:32:56.867 00:32:56.867 ' 00:32:56.867 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.867 --rc genhtml_branch_coverage=1 00:32:56.867 --rc genhtml_function_coverage=1 00:32:56.867 --rc genhtml_legend=1 00:32:56.867 --rc geninfo_all_blocks=1 00:32:56.868 --rc geninfo_unexecuted_blocks=1 00:32:56.868 00:32:56.868 ' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:56.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.868 --rc genhtml_branch_coverage=1 00:32:56.868 --rc genhtml_function_coverage=1 00:32:56.868 --rc genhtml_legend=1 00:32:56.868 --rc geninfo_all_blocks=1 00:32:56.868 --rc geninfo_unexecuted_blocks=1 00:32:56.868 00:32:56.868 ' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:56.868 ************************************ 00:32:56.868 START TEST nvmf_abort 00:32:56.868 ************************************ 00:32:56.868 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:57.128 * Looking for test storage... 00:32:57.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.128 --rc genhtml_branch_coverage=1 00:32:57.128 --rc genhtml_function_coverage=1 00:32:57.128 --rc genhtml_legend=1 00:32:57.128 --rc geninfo_all_blocks=1 00:32:57.128 --rc geninfo_unexecuted_blocks=1 00:32:57.128 00:32:57.128 ' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.128 --rc genhtml_branch_coverage=1 00:32:57.128 --rc genhtml_function_coverage=1 00:32:57.128 --rc genhtml_legend=1 00:32:57.128 --rc geninfo_all_blocks=1 00:32:57.128 --rc geninfo_unexecuted_blocks=1 00:32:57.128 00:32:57.128 ' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.128 --rc genhtml_branch_coverage=1 00:32:57.128 --rc genhtml_function_coverage=1 00:32:57.128 --rc genhtml_legend=1 00:32:57.128 --rc geninfo_all_blocks=1 00:32:57.128 --rc geninfo_unexecuted_blocks=1 00:32:57.128 00:32:57.128 ' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.128 --rc genhtml_branch_coverage=1 00:32:57.128 --rc genhtml_function_coverage=1 00:32:57.128 --rc genhtml_legend=1 00:32:57.128 --rc geninfo_all_blocks=1 00:32:57.128 --rc geninfo_unexecuted_blocks=1 00:32:57.128 00:32:57.128 ' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.128 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.129 09:53:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.129 09:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.665 09:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:59.665 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
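The stretch of trace above is nvmf/common.sh choosing the NICs for the run: it builds PCI-ID lists for E810, X722 and Mellanox parts, keeps only the E810 entries, and reports both ports of the adapter at 0000:84:00.0 and 0000:84:00.1 bound to the ice driver. A by-hand spot check of the same lookup, assuming lspci and the usual sysfs layout (the PCI addresses are specific to this machine, and the interface names are resolved a few lines further down in the trace):

  lspci -nn -d 8086:159b                                            # both E810 ports report vendor:device 8086:159b
  basename "$(readlink /sys/bus/pci/devices/0000:84:00.0/driver)"   # prints "ice", the driver the trace checks for
  ls /sys/bus/pci/devices/0000:84:00.0/net                          # kernel interface name (cvl_0_0 in this log)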
00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:59.665 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.665 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:59.665 Found net devices under 0000:84:00.0: cvl_0_0 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.666 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:59.925 Found net devices under 0000:84:00.1: cvl_0_1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.925 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:32:59.925 00:32:59.925 --- 10.0.0.2 ping statistics --- 00:32:59.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.925 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:59.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:32:59.926 00:32:59.926 --- 10.0.0.1 ping statistics --- 00:32:59.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.926 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1670426 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1670426 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1670426 ']' 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:59.926 09:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.926 [2024-10-07 09:53:54.739319] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:59.926 [2024-10-07 09:53:54.740856] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:32:59.926 [2024-10-07 09:53:54.740956] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.185 [2024-10-07 09:53:54.848908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:00.510 [2024-10-07 09:53:55.033045] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.510 [2024-10-07 09:53:55.033093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.510 [2024-10-07 09:53:55.033123] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.510 [2024-10-07 09:53:55.033135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.510 [2024-10-07 09:53:55.033145] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.510 [2024-10-07 09:53:55.034975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.510 [2024-10-07 09:53:55.035012] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.510 [2024-10-07 09:53:55.035016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.510 [2024-10-07 09:53:55.196369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.510 [2024-10-07 09:53:55.196595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:00.510 [2024-10-07 09:53:55.196613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
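nvmf_tcp_init, traced above, turns the two physical ports into a self-contained test rig: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1), a tagged iptables rule opens port 4420, and the target application is then started inside the namespace in interrupt mode. Condensed to the commands that matter, with names, addresses and flags copied from the trace (paths shortened; the final RPC call is only a crude stand-in for the harness's waitforlisten helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                                   # the tag the teardown greps for
  ping -c 1 10.0.0.2                                                   # host namespace reaches the target address
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  ./scripts/rpc.py rpc_get_methods > /dev/null                         # is the RPC socket answering yet?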
00:33:00.510 [2024-10-07 09:53:55.197037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.510 [2024-10-07 09:53:55.259789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.510 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 Malloc0 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 Delay0 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
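The rpc_cmd calls traced above build the entire target side of the abort test: a TCP transport, a 64 MB malloc bdev with 4096-byte blocks, a delay bdev stacked on top of it with large artificial latencies (which is what keeps I/O queued long enough for aborts to land), and subsystem cnode0 with Delay0 as its namespace; the listener on 10.0.0.2:4420 is added in the lines that follow. Issued by hand against the same application the sequence looks roughly like this, with the arguments copied verbatim from the trace and the delay values understood to be microseconds per the bdev_delay RPC:

  rpc=./scripts/rpc.py                      # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # -a: allow any host NQN
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

Note that none of these need ip netns exec: the RPC socket is a path-based Unix-domain socket, so it is reachable from the host namespace even though the target runs inside cvl_0_0_ns_spdk.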
00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 [2024-10-07 09:53:55.324001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.799 09:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:00.799 [2024-10-07 09:53:55.402148] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:03.330 Initializing NVMe Controllers 00:33:03.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:03.330 controller IO queue size 128 less than required 00:33:03.330 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:03.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:03.330 Initialization complete. Launching workers. 
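With the subsystem and discovery listeners up on 10.0.0.2:4420, the initiator half of the test is simply the prebuilt abort example aimed at that address. The command below repeats the invocation from the trace; the option meanings in the comment are my reading of the example's usage rather than something this log states, except for the queue depth, which the "controller IO queue size 128 less than required" notice above corroborates:

  # -r: transport ID of the subsystem created above; -c 0x1: run on a single core;
  # -t 1: run for one second; -l warning: keep logging quiet; -q 128: queue depth,
  # deep enough to pile up behind Delay0 and give the abort path work to do.
  ./build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128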
00:33:03.330 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29536 00:33:03.330 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29597, failed to submit 66 00:33:03.330 success 29536, unsuccessful 61, failed 0 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.330 rmmod nvme_tcp 00:33:03.330 rmmod nvme_fabrics 00:33:03.330 rmmod nvme_keyring 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1670426 ']' 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1670426 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1670426 ']' 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1670426 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1670426 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1670426' 00:33:03.330 killing process with pid 1670426 
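Reading the abort summary above the way the example prints it, the counters are internally consistent: the 29,536 successful aborts match the 29,536 I/Os reported as failed, which suggests those I/Os completed with aborted status, and the totals line up as one abort attempt per outstanding I/O:

  127 completed + 29,536 aborted             = 29,663 I/Os issued by the workload
  29,597 aborts submitted + 66 not submitted = 29,663 abort attempts, one per I/O
  29,536 successful + 61 unsuccessful        = 29,597 aborts actually submitted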
00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1670426 00:33:03.330 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1670426 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.330 09:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.867 00:33:05.867 real 0m8.480s 00:33:05.867 user 0m9.995s 00:33:05.867 sys 0m3.699s 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:05.867 ************************************ 00:33:05.867 END TEST nvmf_abort 00:33:05.867 ************************************ 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.867 ************************************ 00:33:05.867 START TEST nvmf_ns_hotplug_stress 00:33:05.867 ************************************ 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:05.867 * Looking for test storage... 
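The teardown traced just before the nvmf_ns_hotplug_stress banner undoes the earlier setup: the target process is killed and reaped, the SPDK-tagged iptables rule is dropped, and the test addresses and namespace are removed. The kill, the iptables pipeline and the address flush are visible in the trace; _remove_spdk_ns itself runs with xtrace suppressed, so the last command below is an assumption about its effect rather than a line from this log:

  kill "$nvmfpid" && wait "$nvmfpid"                     # nvmfpid=1670426 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk                        # assumed; returns cvl_0_0 to the host namespace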
00:33:05.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.867 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:05.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.868 --rc genhtml_branch_coverage=1 00:33:05.868 --rc genhtml_function_coverage=1 00:33:05.868 --rc genhtml_legend=1 00:33:05.868 --rc geninfo_all_blocks=1 00:33:05.868 --rc geninfo_unexecuted_blocks=1 00:33:05.868 00:33:05.868 ' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:05.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.868 --rc genhtml_branch_coverage=1 00:33:05.868 --rc genhtml_function_coverage=1 00:33:05.868 --rc genhtml_legend=1 00:33:05.868 --rc geninfo_all_blocks=1 00:33:05.868 --rc geninfo_unexecuted_blocks=1 00:33:05.868 00:33:05.868 ' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:05.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.868 --rc genhtml_branch_coverage=1 00:33:05.868 --rc genhtml_function_coverage=1 00:33:05.868 --rc genhtml_legend=1 00:33:05.868 --rc geninfo_all_blocks=1 00:33:05.868 --rc geninfo_unexecuted_blocks=1 00:33:05.868 00:33:05.868 ' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:05.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.868 --rc genhtml_branch_coverage=1 00:33:05.868 --rc genhtml_function_coverage=1 
00:33:05.868 --rc genhtml_legend=1 00:33:05.868 --rc geninfo_all_blocks=1 00:33:05.868 --rc geninfo_unexecuted_blocks=1 00:33:05.868 00:33:05.868 ' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
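The block above is the same lcov version probe that ran before nvmf_abort: scripts/common.sh takes the last field of lcov --version (1.15 here), splits it and the threshold 2 on dots and dashes, compares them field by field, and, since 1.15 is older than 2, exports the old-style coverage options. A condensed sketch of that comparison, paraphrasing what the xtrace shows rather than quoting the scripts/common.sh source:

  lt() {                                   # is version $1 older than version $2?
      local -a v1 v2
      local i
      IFS='.-' read -ra v1 <<< "$1"
      IFS='.-' read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                             # equal versions are not "older than"
  }
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      # pre-2.0 lcov: keep the old-style rc options (the trace also appends the
      # matching genhtml_*/geninfo_* rc flags when it assembles LCOV_OPTS)
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi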
00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.868 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.869 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.405 09:54:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.405 09:54:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:08.405 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:08.405 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:08.405 
09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:08.405 Found net devices under 0000:84:00.0: cvl_0_0 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:08.405 Found net devices under 0000:84:00.1: cvl_0_1 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.405 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.406 09:54:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:33:08.406 00:33:08.406 --- 10.0.0.2 ping statistics --- 00:33:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.406 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:33:08.406 00:33:08.406 --- 10.0.0.1 ping statistics --- 00:33:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.406 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:08.406 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1672899 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1672899 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1672899 ']' 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
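For reference, the nvmf_tcp_init sequence traced above reduces to the following steps; this is a minimal reconstruction from this run's xtrace, so the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values detected on this host:

# target-side port is isolated in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP port 4420 towards the initiator and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1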
00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:08.406 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:08.406 [2024-10-07 09:54:03.101016] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:08.406 [2024-10-07 09:54:03.103146] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:33:08.406 [2024-10-07 09:54:03.103266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.665 [2024-10-07 09:54:03.234967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:08.665 [2024-10-07 09:54:03.417105] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.665 [2024-10-07 09:54:03.417175] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.665 [2024-10-07 09:54:03.417191] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.665 [2024-10-07 09:54:03.417205] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.666 [2024-10-07 09:54:03.417240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:08.666 [2024-10-07 09:54:03.418830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:08.666 [2024-10-07 09:54:03.418907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:08.666 [2024-10-07 09:54:03.418913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.925 [2024-10-07 09:54:03.585374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:08.925 [2024-10-07 09:54:03.585627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:08.925 [2024-10-07 09:54:03.585630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:08.925 [2024-10-07 09:54:03.585945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
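The nvmfappstart step above launches the target inside that namespace; a minimal sketch of the equivalent launch-and-wait follows, where the socket poll is only a simplified stand-in for the waitforlisten helper the test actually uses:

# -m 0xE pins reactors to cores 1-3, -e 0xFFFF enables all tracepoint groups,
# -i 0 selects shared-memory id 0, and --interrupt-mode matches this test variant
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# wait until the app has created its default RPC socket before issuing any rpc.py calls
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.5
done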
00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:33:08.925 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:09.493 [2024-10-07 09:54:04.016055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.493 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:10.061 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.320 [2024-10-07 09:54:05.128444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.578 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.836 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:11.094 Malloc0 00:33:11.354 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:11.613 Delay0 00:33:11.613 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.871 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:12.437 NULL1 00:33:12.437 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
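Collected in one place, the RPC provisioning performed above before the stress loop starts (commands copied from the trace; only the rpc_py shorthand is added here):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport, created with the exact flags the test passes (-t tcp -o -u 8192)
$rpc_py nvmf_create_transport -t tcp -o -u 8192
# subsystem limited to 10 namespaces, with data and discovery listeners on 10.0.0.2:4420
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Delay0 layers artificial latency (parameters as passed by the test) on a 32 MB malloc bdev
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# NULL1 is a 1000 MB null bdev that the loop below keeps resizing
$rpc_py bdev_null_create NULL1 1000 512
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1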
00:33:12.695 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1673446 00:33:12.695 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:12.695 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:12.695 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.068 Read completed with error (sct=0, sc=11) 00:33:14.068 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:14.326 09:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:14.326 09:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:14.585 true 00:33:14.585 09:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:14.585 09:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 09:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:15.776 09:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:15.776 09:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:16.034 true 00:33:16.034 09:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:16.034 09:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.598 09:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:16.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:16.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.114 09:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:17.114 09:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:17.679 true 00:33:17.679 09:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:17.679 09:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:17.937 09:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.195 09:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:18.195 09:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:18.760 true 00:33:18.760 09:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 
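The iterations that follow all repeat the same pattern: as long as the spdk_nvme_perf reader started at @40 (PID 1673446 in this run) is still alive, namespace 1 is hot-removed, Delay0 is re-attached, and NULL1 grows by one unit. A minimal sketch of that loop, assuming the simple kill -0 driven structure suggested by the trace and rpc_py as defined in the earlier sketch (the real ns_hotplug_stress.sh may differ in detail):

# 30-second random-read load from the default namespace against 10.0.0.2:4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                              # loop while perf runs
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove nsid 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow the other namespace
done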
00:33:18.760 09:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.132 09:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.391 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:20.391 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:20.956 true 00:33:20.956 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:20.956 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:21.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.523 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:21.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:21.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:22.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:22.038 09:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:22.038 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:22.296 true 00:33:22.296 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:22.296 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.862 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.378 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:23.378 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:23.942 true 00:33:23.942 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:23.942 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:24.200 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.766 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:24.766 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:25.025 true 00:33:25.025 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:25.025 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:26.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:26.397 09:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:26.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:26.655 09:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:26.655 09:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:27.221 true 00:33:27.221 09:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:27.221 09:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:27.479 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.044 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:28.045 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:28.610 true 00:33:28.610 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:28.610 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.543 09:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:29.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:29.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.059 09:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:30.059 09:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:30.317 true 00:33:30.317 09:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:30.317 09:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.691 09:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.691 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:33:31.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:31.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:32.206 09:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:32.206 09:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:32.465 true 00:33:32.465 09:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:32.465 09:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.029 09:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.543 09:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:33.543 09:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:34.107 true 00:33:34.107 09:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:34.107 09:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.927 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:34.927 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:35.184 true 00:33:35.441 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:35.441 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.007 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.264 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:36.264 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:36.830 true 00:33:36.830 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:36.830 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.088 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:37.655 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:37.655 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:37.913 true 00:33:37.913 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:37.913 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.544 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:39.544 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:39.803 true 00:33:39.803 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:39.803 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.410 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.966 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:40.966 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:41.223 true 00:33:41.223 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:41.223 09:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:41.789 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:42.304 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:42.304 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:42.561 true 00:33:42.819 09:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:42.819 09:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:43.384 Initializing NVMe Controllers 00:33:43.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:43.384 Controller IO queue size 128, less than required. 00:33:43.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:43.384 Controller IO queue size 128, less than required. 00:33:43.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:43.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:43.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:43.384 Initialization complete. Launching workers. 
00:33:43.384 ======================================================== 00:33:43.384 Latency(us) 00:33:43.384 Device Information : IOPS MiB/s Average min max 00:33:43.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4220.63 2.06 20379.81 2991.75 1129386.55 00:33:43.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13567.67 6.62 9433.99 1328.62 449359.50 00:33:43.384 ======================================================== 00:33:43.384 Total : 17788.30 8.69 12031.10 1328.62 1129386.55 00:33:43.384 00:33:43.384 09:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:43.642 09:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:43.642 09:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:44.207 true 00:33:44.207 09:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1673446 00:33:44.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1673446) - No such process 00:33:44.207 09:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1673446 00:33:44.207 09:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.772 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:45.032 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:45.032 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:45.032 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:45.032 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:45.032 09:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:45.601 null0 00:33:45.601 09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:45.601 09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:45.601 09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:45.859 null1 00:33:45.859 09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:45.859 
09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:45.859 09:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:46.428 null2 00:33:46.428 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:46.429 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:46.429 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:46.997 null3 00:33:46.997 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:46.997 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:46.997 09:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:47.565 null4 00:33:47.565 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:47.565 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:47.565 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:48.133 null5 00:33:48.133 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.133 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.133 09:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:48.392 null6 00:33:48.392 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.393 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.393 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:48.961 null7 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
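The interleaved xtrace records above come from the add_remove helper in target/ns_hotplug_stress.sh (the sh@14 and sh@16-sh@18 lines): each worker pins one namespace ID to one null bdev and repeatedly attaches and detaches it against the test subsystem. A minimal sketch of that helper, reconstructed from the commands visible in the trace (the 10-iteration bound, the "-n <nsid>" argument order and the nqn.2016-06.io.spdk:cnode1 NQN are taken from the log; the rest is illustrative, not the verbatim script):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace in a loop,
    # mirroring the sh@16-sh@18 trace above.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace <nsid> of the test subsystem ...
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then hot-remove that namespace again
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }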
00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.961 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
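The worker launch traced at sh@59-sh@66 follows the usual bash fan-out pattern: create one null bdev per thread, start one add_remove job per (nsid, bdev) pair in the background, collect the PIDs, and wait on all of them (the eight PIDs being waited on appear in the record just below). A rough sketch under the same caveat as the helper above; nthreads=8, the nsid-to-bdev mapping, and the 100/4096 arguments to bdev_null_create (size and block size) are taken from the trace, while the variable names are illustrative:

    nthreads=8
    pids=()
    # one null bdev per worker, as in "bdev_null_create nullN 100 4096" above
    for ((i = 0; i < nthreads; i++)); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # nsid 1..8 maps to null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"

Because each worker owns a fixed namespace ID, the interleaved nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns calls that follow never target the same nsid from two jobs at once; the stress comes from eight concurrent attach/detach streams hitting the one subsystem.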
00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1678071 1678072 1678073 1678075 1678078 1678080 1678082 1678084 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:48.962 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:49.221 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:49.221 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:49.221 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.480 09:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.480 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.738 09:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:49.738 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:49.996 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.255 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:50.513 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:50.513 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:50.513 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.772 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:51.031 09:54:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:51.031 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:51.289 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:51.289 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:51.289 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:51.289 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:51.289 
09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:51.289 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.547 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:51.805 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.806 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.806 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:51.806 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:51.806 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:51.806 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.064 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.322 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:52.581 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.839 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.097 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:53.356 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:53.356 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:53.356 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.356 09:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:53.356 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:53.356 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:53.356 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:53.356 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.614 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.615 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:53.872 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:54.129 
09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.130 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:54.387 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.387 09:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.387 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.387 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.646 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:54.905 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:55.163 09:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:55.163 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.164 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:55.164 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:55.164 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:55.164 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:55.164 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:55.421 09:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.421 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.679 rmmod nvme_tcp 00:33:55.679 rmmod nvme_fabrics 00:33:55.679 rmmod nvme_keyring 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1672899 ']' 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1672899 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1672899 ']' 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1672899 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1672899 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1672899' 00:33:55.679 killing 
process with pid 1672899 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1672899 00:33:55.679 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1672899 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.248 09:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.149 00:33:58.149 real 0m52.651s 00:33:58.149 user 3m33.986s 00:33:58.149 sys 0m25.433s 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:58.149 ************************************ 00:33:58.149 END TEST nvmf_ns_hotplug_stress 00:33:58.149 ************************************ 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:58.149 ************************************ 00:33:58.149 START TEST nvmf_delete_subsystem 00:33:58.149 ************************************ 00:33:58.149 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp --interrupt-mode 00:33:58.149 * Looking for test storage... 00:33:58.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:58.409 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:58.409 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:33:58.409 09:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.409 --rc genhtml_branch_coverage=1 00:33:58.409 --rc genhtml_function_coverage=1 00:33:58.409 --rc genhtml_legend=1 00:33:58.409 --rc geninfo_all_blocks=1 00:33:58.409 --rc geninfo_unexecuted_blocks=1 00:33:58.409 00:33:58.409 ' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.409 --rc genhtml_branch_coverage=1 00:33:58.409 --rc genhtml_function_coverage=1 00:33:58.409 --rc genhtml_legend=1 00:33:58.409 --rc geninfo_all_blocks=1 00:33:58.409 --rc geninfo_unexecuted_blocks=1 00:33:58.409 00:33:58.409 ' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.409 --rc genhtml_branch_coverage=1 00:33:58.409 --rc genhtml_function_coverage=1 00:33:58.409 --rc genhtml_legend=1 00:33:58.409 --rc geninfo_all_blocks=1 00:33:58.409 --rc geninfo_unexecuted_blocks=1 00:33:58.409 00:33:58.409 ' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.409 --rc genhtml_branch_coverage=1 00:33:58.409 --rc genhtml_function_coverage=1 00:33:58.409 --rc 
genhtml_legend=1 00:33:58.409 --rc geninfo_all_blocks=1 00:33:58.409 --rc geninfo_unexecuted_blocks=1 00:33:58.409 00:33:58.409 ' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.409 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.410 09:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.410 09:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.703 09:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.703 09:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:01.703 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:01.703 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.703 09:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:01.703 Found net devices under 0000:84:00.0: cvl_0_0 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.703 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:01.704 Found net devices under 0000:84:00.1: cvl_0_1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:34:01.704 00:34:01.704 --- 10.0.0.2 ping statistics --- 00:34:01.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.704 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:34:01.704 00:34:01.704 --- 10.0.0.1 ping statistics --- 00:34:01.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.704 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:01.704 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1681092 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1681092 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1681092 ']' 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
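The nvmf_tcp_init trace above amounts to the following namespace plumbing: one e810 port is moved into a private namespace as the target side, the peer port stays in the default namespace as the initiator side, the listener port is opened in iptables, and reachability is checked in both directions. This is a condensed sketch of the commands visible in the log, not the common.sh helper itself; the interface names and addresses are the ones printed above and will differ on other hosts.

# Sketch of the nvmf_tcp_init steps traced above (run as root on a host with
# both cvl_0_* ports present); values copied from the log, structure is illustrative.
NS=cvl_0_0_ns_spdk          # NVMF_TARGET_NAMESPACE
TGT_IF=cvl_0_0              # target-side port, moved into the namespace
INI_IF=cvl_0_1              # initiator-side port, left in the default namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port and verify both directions respond.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1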
00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.704 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:01.704 [2024-10-07 09:54:56.077218] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:01.704 [2024-10-07 09:54:56.078688] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:01.704 [2024-10-07 09:54:56.078766] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.704 [2024-10-07 09:54:56.165017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:01.704 [2024-10-07 09:54:56.278488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.704 [2024-10-07 09:54:56.278549] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.704 [2024-10-07 09:54:56.278563] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.704 [2024-10-07 09:54:56.278575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.704 [2024-10-07 09:54:56.278585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.704 [2024-10-07 09:54:56.279420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.705 [2024-10-07 09:54:56.279426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.705 [2024-10-07 09:54:56.383122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:01.705 [2024-10-07 09:54:56.383160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:01.705 [2024-10-07 09:54:56.383467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
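With the namespace in place, the target is launched inside it in interrupt mode and the script blocks until the RPC socket answers; the startup notices above confirm two reactors (cores 0 and 1) and each spdk_thread switching to interrupt mode. A minimal stand-in for the nvmfappstart/waitforlisten pattern follows, assuming nvmf_tgt and rpc.py from the SPDK build tree; the polling loop is illustrative rather than the exact helper from autotest_common.sh.

# Launch nvmf_tgt inside the namespace with the flags seen in the trace above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Illustrative readiness probe standing in for waitforlisten: poll the default
# RPC socket until the target answers, then continue with the test body.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
done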
00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 [2024-10-07 09:54:57.208122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 [2024-10-07 09:54:57.240372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 NULL1 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 Delay0 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1681251 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:02.640 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:02.641 [2024-10-07 09:54:57.345576] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
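For readability, the rpc_cmd sequence traced above (transport, subsystem, listener, null bdev, delay bdev, namespace, then the perf job) corresponds to the following direct rpc.py calls; the annotations are the usual meanings of these options and nothing here goes beyond what the log already ran:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # TCP transport; -u 8192 is the IO unit size, -o as passed by the test script
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host (-a), serial number, at most 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # listen on 10.0.0.2:4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                      # 1000 MiB null bdev with 512-byte blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # wrap NULL1 so reads and writes see roughly 1 s average and p99 latency (values in microseconds)
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                     # 5 s of 512-byte random I/O, 70% reads, queue depth 128, on cores 2-3
perf_pid=$!

Because every I/O on Delay0 is held for about a second, the queue is still full when nvmf_delete_subsystem is issued two seconds later, which is what produces the run of aborted completions and "starting I/O failed: -6" records that follows.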
00:34:04.539 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.539 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.539 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 [2024-10-07 09:54:59.510920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc484000c00 is same with the state(6) to be set 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write 
completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, 
sc=8) 00:34:04.797 [2024-10-07 09:54:59.512004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400d310 is same with the state(6) to be set 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 Write completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.797 starting I/O failed: -6 00:34:04.797 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 
00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Write completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 Read completed with error (sct=0, sc=8) 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:04.798 starting I/O failed: -6 00:34:05.733 [2024-10-07 09:55:00.486487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee5a70 is same with the state(6) to be set 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 
00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 [2024-10-07 09:55:00.512062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4930 is same with the state(6) to be set 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Read completed with error (sct=0, sc=8) 00:34:05.733 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 [2024-10-07 09:55:00.512369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4570 is same with the state(6) to be set 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 [2024-10-07 09:55:00.514996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400d640 is same with the state(6) to be set 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 
00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 Write completed with error (sct=0, sc=8) 00:34:05.734 Read completed with error (sct=0, sc=8) 00:34:05.734 [2024-10-07 09:55:00.515829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400cfe0 is same with the state(6) to be set 00:34:05.734 Initializing NVMe Controllers 00:34:05.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.734 Controller IO queue size 128, less than required. 00:34:05.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:05.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:05.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:05.734 Initialization complete. Launching workers. 00:34:05.734 ======================================================== 00:34:05.734 Latency(us) 00:34:05.734 Device Information : IOPS MiB/s Average min max 00:34:05.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.94 0.09 912883.59 598.36 1013805.79 00:34:05.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 142.78 0.07 965340.47 1111.42 1012615.60 00:34:05.734 ======================================================== 00:34:05.734 Total : 325.72 0.16 935878.38 598.36 1013805.79 00:34:05.734 00:34:05.734 [2024-10-07 09:55:00.516426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee5a70 (9): Bad file descriptor 00:34:05.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:05.734 09:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.734 09:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:05.734 09:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681251 00:34:05.734 09:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1681251 00:34:06.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1681251) - No such process 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1681251 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1681251 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.302 09:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1681251 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 [2024-10-07 09:55:01.036313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.302 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1681643 00:34:06.303 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:06.303 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:06.303 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:06.303 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:06.303 [2024-10-07 09:55:01.099223] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:34:06.869 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:06.869 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:06.869 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:07.435 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:07.435 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:07.435 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.033 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:08.033 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:08.033 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.317 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:08.317 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:08.317 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.882 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:08.882 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:08.882 09:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:09.447 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:09.447 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:09.447 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:09.447 Initializing NVMe Controllers 00:34:09.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:09.447 Controller IO queue size 128, less than required. 00:34:09.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:09.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:09.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:09.447 Initialization complete. 
Launching workers. 00:34:09.447 ======================================================== 00:34:09.447 Latency(us) 00:34:09.447 Device Information : IOPS MiB/s Average min max 00:34:09.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004485.22 1000221.19 1012645.83 00:34:09.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004641.18 1000290.91 1012716.46 00:34:09.447 ======================================================== 00:34:09.447 Total : 256.00 0.12 1004563.20 1000221.19 1012716.46 00:34:09.447 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1681643 00:34:10.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1681643) - No such process 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1681643 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.015 rmmod nvme_tcp 00:34:10.015 rmmod nvme_fabrics 00:34:10.015 rmmod nvme_keyring 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1681092 ']' 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1681092 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1681092 ']' 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1681092 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.015 09:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1681092 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1681092' 00:34:10.015 killing process with pid 1681092 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1681092 00:34:10.015 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1681092 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.274 09:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.808 00:34:12.808 real 0m14.109s 00:34:12.808 user 0m24.997s 00:34:12.808 sys 0m4.835s 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.808 ************************************ 00:34:12.808 END TEST nvmf_delete_subsystem 00:34:12.808 ************************************ 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 
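The teardown recorded above condenses to a few operations; a hedged sketch of the same steps, with the pid variable standing in for the harness's bookkeeping (1681092 in this run):

kill "$nvmfpid"                                        # killprocess: stop the interrupt-mode target
modprobe -v -r nvme-tcp                                # unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring output above)
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop any SPDK_NVMF firewall rules the test added
ip -4 addr flush cvl_0_1                               # clear the test address from the initiator-side port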
00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.808 ************************************ 00:34:12.808 START TEST nvmf_host_management 00:34:12.808 ************************************ 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:12.808 * Looking for test storage... 00:34:12.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:12.808 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.809 --rc genhtml_branch_coverage=1 00:34:12.809 --rc genhtml_function_coverage=1 00:34:12.809 --rc genhtml_legend=1 00:34:12.809 --rc geninfo_all_blocks=1 00:34:12.809 --rc geninfo_unexecuted_blocks=1 00:34:12.809 00:34:12.809 ' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.809 --rc genhtml_branch_coverage=1 00:34:12.809 --rc genhtml_function_coverage=1 00:34:12.809 --rc genhtml_legend=1 00:34:12.809 --rc geninfo_all_blocks=1 00:34:12.809 --rc geninfo_unexecuted_blocks=1 00:34:12.809 00:34:12.809 ' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.809 --rc genhtml_branch_coverage=1 00:34:12.809 --rc genhtml_function_coverage=1 00:34:12.809 --rc genhtml_legend=1 00:34:12.809 --rc geninfo_all_blocks=1 00:34:12.809 --rc geninfo_unexecuted_blocks=1 00:34:12.809 00:34:12.809 ' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.809 --rc genhtml_branch_coverage=1 00:34:12.809 --rc genhtml_function_coverage=1 00:34:12.809 --rc genhtml_legend=1 
00:34:12.809 --rc geninfo_all_blocks=1 00:34:12.809 --rc geninfo_unexecuted_blocks=1 00:34:12.809 00:34:12.809 ' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.809 09:55:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.809 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.810 09:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.341 09:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:15.341 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:15.341 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
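The trace above resolves each detected E810 PCI function (0000:84:00.0 and 0000:84:00.1) to its kernel network interface by globbing the device's sysfs net/ directory. A minimal standalone sketch of the same lookup, assuming only that the BDFs seen in this run exist on the machine:

#!/usr/bin/env bash
set -euo pipefail

for pci in 0000:84:00.0 0000:84:00.1; do
    # Every entry under /sys/bus/pci/devices/<bdf>/net/ is a netdev bound to that function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if [[ -e "${pci_net_devs[0]}" ]]; then
        # Keep only the interface names (strip the sysfs path prefix), as the harness does.
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    else
        echo "No net devices under $pci (device absent or no driver bound)"
    fi
done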
00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:15.341 Found net devices under 0000:84:00.0: cvl_0_0 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:15.341 Found net devices under 0000:84:00.1: cvl_0_1 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.341 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:34:15.342 00:34:15.342 --- 10.0.0.2 ping statistics --- 00:34:15.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.342 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:15.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:34:15.342 00:34:15.342 --- 10.0.0.1 ping statistics --- 00:34:15.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.342 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1684038 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1684038 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1684038 ']' 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:15.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:15.342 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.342 [2024-10-07 09:55:09.968225] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.342 [2024-10-07 09:55:09.969819] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:15.342 [2024-10-07 09:55:09.969920] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.342 [2024-10-07 09:55:10.068649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.601 [2024-10-07 09:55:10.241804] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.601 [2024-10-07 09:55:10.241885] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.601 [2024-10-07 09:55:10.241920] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.601 [2024-10-07 09:55:10.241936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.601 [2024-10-07 09:55:10.241948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.601 [2024-10-07 09:55:10.244747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.601 [2024-10-07 09:55:10.244832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.601 [2024-10-07 09:55:10.244909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:15.601 [2024-10-07 09:55:10.244914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.601 [2024-10-07 09:55:10.403058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:15.601 [2024-10-07 09:55:10.403285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:15.601 [2024-10-07 09:55:10.403596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:15.601 [2024-10-07 09:55:10.404459] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:15.601 [2024-10-07 09:55:10.404991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
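The nvmf_tcp_init steps traced above split the two ports across network namespaces so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) talk over a real link before nvmf_tgt starts in interrupt mode. A condensed sketch of that sequence, reusing the interface and namespace names from this run as placeholders:

#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # moved into $NS; carries the target address 10.0.0.2
INI_IF=cvl_0_1          # stays in the root namespace; carries the initiator address 10.0.0.1

# Start from clean interfaces, then create the namespace and move the target port in.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the harness does, before launching the target,
# which then runs inside the namespace, e.g.:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1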
00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.859 [2024-10-07 09:55:10.541962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.859 Malloc0 00:34:15.859 [2024-10-07 09:55:10.621812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1684174 00:34:15.859 09:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1684174 /var/tmp/bdevperf.sock 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1684174 ']' 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:34:15.859 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:15.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:15.860 { 00:34:15.860 "params": { 00:34:15.860 "name": "Nvme$subsystem", 00:34:15.860 "trtype": "$TEST_TRANSPORT", 00:34:15.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.860 "adrfam": "ipv4", 00:34:15.860 "trsvcid": "$NVMF_PORT", 00:34:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.860 "hdgst": ${hdgst:-false}, 00:34:15.860 "ddgst": ${ddgst:-false} 00:34:15.860 }, 00:34:15.860 "method": "bdev_nvme_attach_controller" 00:34:15.860 } 00:34:15.860 EOF 00:34:15.860 )") 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
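The gen_nvmf_target_json heredoc above is what bdevperf consumes through --json /dev/fd/63. A hedged reconstruction of an equivalent standalone config and invocation follows; the outer "subsystems"/"bdev" envelope is an assumption about what the helper wraps around the fragment shown in the trace, /tmp/nvme0.json is a hypothetical file name, and the bdevperf path is abbreviated from the workspace path in this log:

#!/usr/bin/env bash
set -euo pipefail

# Attach the NVMe-oF controller exposed at 10.0.0.2:4420 as bdev "Nvme0n1".
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# 64 outstanding 64 KiB verify I/Os for 10 seconds, matching the flags in the trace.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10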
00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:34:15.860 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:15.860 "params": { 00:34:15.860 "name": "Nvme0", 00:34:15.860 "trtype": "tcp", 00:34:15.860 "traddr": "10.0.0.2", 00:34:15.860 "adrfam": "ipv4", 00:34:15.860 "trsvcid": "4420", 00:34:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:15.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:15.860 "hdgst": false, 00:34:15.860 "ddgst": false 00:34:15.860 }, 00:34:15.860 "method": "bdev_nvme_attach_controller" 00:34:15.860 }' 00:34:16.118 [2024-10-07 09:55:10.718403] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:16.118 [2024-10-07 09:55:10.718503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684174 ] 00:34:16.118 [2024-10-07 09:55:10.789505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.118 [2024-10-07 09:55:10.901706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.684 Running I/O for 10 seconds... 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:34:16.684 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.945 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:16.945 [2024-10-07 09:55:11.645716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645802] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.645991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the 
state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.945 [2024-10-07 09:55:11.646492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bde0 is same with the state(6) to be set 00:34:16.946 [2024-10-07 09:55:11.646688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.646981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.646997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:16.946 [2024-10-07 09:55:11.647116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 
09:55:11.647444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.946 [2024-10-07 09:55:11.647721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.946 [2024-10-07 09:55:11.647738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:16.946 [2024-10-07 09:55:11.647752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.946 [2024-10-07 09:55:11.647767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:16.946 [2024-10-07 09:55:11.647781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.946 - 00:34:16.947 [2024-10-07 09:55:11.647797 - 09:55:11.648691] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 28 further READ commands (sqid:1 cid:34-61 nsid:1 lba:86272-89728, lba advancing by 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical command/completion pairs condensed]
00:34:16.947 [2024-10-07 09:55:11.648706] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.947 [2024-10-07 09:55:11.648720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.648735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.947 [2024-10-07 09:55:11.648749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.648763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d73b0 is same with the state(6) to be set 00:34:16.947 [2024-10-07 09:55:11.648839] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d73b0 was disconnected and freed. reset controller. 00:34:16.947 [2024-10-07 09:55:11.648943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.947 [2024-10-07 09:55:11.648966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.648984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.947 [2024-10-07 09:55:11.648997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.649012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.947 [2024-10-07 09:55:11.649025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.649044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.947 [2024-10-07 09:55:11.649059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.947 [2024-10-07 09:55:11.649072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe2c0 is same with the state(6) to be set 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.947 [2024-10-07 09:55:11.650235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:16.947 task offset: 81920 on job bdev=Nvme0n1 fails 00:34:16.947 00:34:16.947 Latency(us) 00:34:16.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:34:16.947 Job: Nvme0n1 ended in about 0.44 seconds with error 00:34:16.947 Verification LBA range: start 0x0 length 0x400 00:34:16.947 Nvme0n1 : 0.44 1443.22 90.20 144.32 0.00 39268.95 7281.78 34369.99 00:34:16.947 =================================================================================================================== 00:34:16.947 Total : 1443.22 90.20 144.32 0.00 39268.95 7281.78 34369.99 00:34:16.947 [2024-10-07 09:55:11.653527] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:16.947 [2024-10-07 09:55:11.653560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe2c0 (9): Bad file descriptor 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.947 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:16.948 [2024-10-07 09:55:11.698413] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1684174 00:34:17.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1684174) - No such process 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:17.880 { 00:34:17.880 "params": { 00:34:17.880 "name": "Nvme$subsystem", 00:34:17.880 "trtype": "$TEST_TRANSPORT", 00:34:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.880 "adrfam": "ipv4", 00:34:17.880 "trsvcid": "$NVMF_PORT", 00:34:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.880 "hdgst": ${hdgst:-false}, 00:34:17.880 "ddgst": ${ddgst:-false} 00:34:17.880 }, 00:34:17.880 "method": "bdev_nvme_attach_controller" 00:34:17.880 } 00:34:17.880 EOF 00:34:17.880 )") 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
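gen_nvmf_target_json, traced above, expands the bdev_nvme_attach_controller template with this run's values and feeds it to bdevperf over /dev/fd/62; the resolved fragment is printed just below. A minimal standalone sketch of the same run with a config file is shown here. Only the params block is taken from this log; the outer subsystems/bdev/config wrapper is an assumption about the usual SPDK JSON config layout, not something this trace shows.

# Sketch only: the same bdevperf workload with a file-based JSON config instead of /dev/fd/62.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64 outstanding 64 KiB verify I/Os for 1 second, the same shape as the invocation traced above.
"$spdk/build/examples/bdevperf" --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1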
00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:34:17.880 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:17.880 "params": { 00:34:17.880 "name": "Nvme0", 00:34:17.880 "trtype": "tcp", 00:34:17.880 "traddr": "10.0.0.2", 00:34:17.880 "adrfam": "ipv4", 00:34:17.880 "trsvcid": "4420", 00:34:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:17.880 "hdgst": false, 00:34:17.880 "ddgst": false 00:34:17.880 }, 00:34:17.880 "method": "bdev_nvme_attach_controller" 00:34:17.880 }' 00:34:18.139 [2024-10-07 09:55:12.756107] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:18.139 [2024-10-07 09:55:12.756301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684443 ] 00:34:18.139 [2024-10-07 09:55:12.862826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.397 [2024-10-07 09:55:12.977825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.397 Running I/O for 1 seconds... 00:34:19.773 1536.00 IOPS, 96.00 MiB/s 00:34:19.773 Latency(us) 00:34:19.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:19.773 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:19.773 Verification LBA range: start 0x0 length 0x400 00:34:19.773 Nvme0n1 : 1.03 1553.33 97.08 0.00 0.00 40556.28 6043.88 34369.99 00:34:19.773 =================================================================================================================== 00:34:19.773 Total : 1553.33 97.08 0.00 0.00 40556.28 6043.88 34369.99 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.773 rmmod nvme_tcp 00:34:19.773 rmmod 
nvme_fabrics 00:34:19.773 rmmod nvme_keyring 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1684038 ']' 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1684038 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1684038 ']' 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1684038 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:19.773 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684038 00:34:20.032 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:20.032 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:20.032 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684038' 00:34:20.032 killing process with pid 1684038 00:34:20.032 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1684038 00:34:20.032 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1684038 00:34:20.290 [2024-10-07 09:55:14.945029] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:20.290 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:20.290 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.291 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:22.824 00:34:22.824 real 0m9.981s 00:34:22.824 user 0m19.542s 00:34:22.824 sys 0m4.679s 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:22.824 ************************************ 00:34:22.824 END TEST nvmf_host_management 00:34:22.824 ************************************ 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.824 ************************************ 00:34:22.824 START TEST nvmf_lvol 00:34:22.824 ************************************ 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:22.824 * Looking for test storage... 
00:34:22.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:22.824 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.825 --rc genhtml_branch_coverage=1 00:34:22.825 --rc genhtml_function_coverage=1 00:34:22.825 --rc genhtml_legend=1 00:34:22.825 --rc geninfo_all_blocks=1 00:34:22.825 --rc geninfo_unexecuted_blocks=1 00:34:22.825 00:34:22.825 ' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.825 --rc genhtml_branch_coverage=1 00:34:22.825 --rc genhtml_function_coverage=1 00:34:22.825 --rc genhtml_legend=1 00:34:22.825 --rc geninfo_all_blocks=1 00:34:22.825 --rc geninfo_unexecuted_blocks=1 00:34:22.825 00:34:22.825 ' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.825 --rc genhtml_branch_coverage=1 00:34:22.825 --rc genhtml_function_coverage=1 00:34:22.825 --rc genhtml_legend=1 00:34:22.825 --rc geninfo_all_blocks=1 00:34:22.825 --rc geninfo_unexecuted_blocks=1 00:34:22.825 00:34:22.825 ' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.825 --rc genhtml_branch_coverage=1 00:34:22.825 --rc genhtml_function_coverage=1 00:34:22.825 --rc genhtml_legend=1 00:34:22.825 --rc geninfo_all_blocks=1 00:34:22.825 --rc geninfo_unexecuted_blocks=1 00:34:22.825 00:34:22.825 ' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.825 09:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.825 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:25.359 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.359 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.359 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.359 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.360 09:55:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:25.360 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:25.360 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:25.360 Found net devices under 0000:84:00.0: cvl_0_0 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:25.360 Found net devices under 0000:84:00.1: cvl_0_1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.360 
09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:34:25.360 00:34:25.360 --- 10.0.0.2 ping statistics --- 00:34:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.360 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:25.360 00:34:25.360 --- 10.0.0.1 ping statistics --- 00:34:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.360 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1686663 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1686663 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1686663 ']' 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:25.360 09:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 [2024-10-07 09:55:20.032563] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
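nvmfappstart, traced just above, launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x7 and --interrupt-mode, and waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A minimal by-hand sketch follows; polling rpc_get_methods as the readiness probe is an assumption for illustration, not a copy of waitforlisten's own check.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target in the namespace created earlier: 3 cores (-m 0x7), interrupt mode.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
tgt_pid=$!   # PID of the ip-netns-exec wrapper; good enough for a liveness check
# Wait until the RPC socket responds before issuing any configuration calls.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo "nvmf_tgt exited before its RPC socket came up" >&2
        exit 1
    fi
    sleep 0.5
done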
00:34:25.361 [2024-10-07 09:55:20.033904] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:25.361 [2024-10-07 09:55:20.033987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.361 [2024-10-07 09:55:20.108990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:25.620 [2024-10-07 09:55:20.226390] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.620 [2024-10-07 09:55:20.226451] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.620 [2024-10-07 09:55:20.226465] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.620 [2024-10-07 09:55:20.226476] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.620 [2024-10-07 09:55:20.226486] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.620 [2024-10-07 09:55:20.227496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.620 [2024-10-07 09:55:20.229910] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.620 [2024-10-07 09:55:20.229922] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.620 [2024-10-07 09:55:20.335315] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:25.621 [2024-10-07 09:55:20.335558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:25.621 [2024-10-07 09:55:20.335565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:25.621 [2024-10-07 09:55:20.335867] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
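The RPC trace that follows builds the volume stack for this lvol test: nvmf_create_transport brings up TCP, two 64 MB malloc bdevs (512-byte blocks) are striped into a raid0 bdev, an lvstore named lvs and an lvol of initial size 20 (LVOL_BDEV_INIT_SIZE) are carved out of it, and the lvol is exported as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. A condensed sketch of those same calls, capturing the printed UUIDs into variables the way the script threads them through:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                          # prints Malloc0
$rpc bdev_malloc_create 64 512                          # prints Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore UUID (ad7eebfa-... in this run)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # lvol bdev name/UUID (91b739d8-... in this run)
# Export the lvol over NVMe/TCP on the target-side address configured earlier.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420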
00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.621 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:26.191 [2024-10-07 09:55:20.858660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.191 09:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:26.758 09:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:26.758 09:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:27.016 09:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:27.016 09:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:27.275 09:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:27.533 09:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ad7eebfa-ac19-4ece-9592-780a843e64d0 00:34:27.533 09:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ad7eebfa-ac19-4ece-9592-780a843e64d0 lvol 20 00:34:28.467 09:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=91b739d8-29f5-4b89-bf09-4c5fd870b5df 00:34:28.467 09:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:28.467 09:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 91b739d8-29f5-4b89-bf09-4c5fd870b5df 00:34:29.059 09:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.624 [2024-10-07 09:55:24.154701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:34:29.624 09:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:30.188 09:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1687221 00:34:30.188 09:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:30.188 09:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:31.118 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 91b739d8-29f5-4b89-bf09-4c5fd870b5df MY_SNAPSHOT 00:34:31.681 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4353a182-a3e9-4ae3-9291-b062ab0e4427 00:34:31.681 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 91b739d8-29f5-4b89-bf09-4c5fd870b5df 30 00:34:31.937 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4353a182-a3e9-4ae3-9291-b062ab0e4427 MY_CLONE 00:34:32.503 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dc2dbb75-378e-4ff8-9d9a-748c0d4dbd71 00:34:32.503 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dc2dbb75-378e-4ff8-9d9a-748c0d4dbd71 00:34:33.070 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1687221 00:34:41.175 Initializing NVMe Controllers 00:34:41.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:41.175 Controller IO queue size 128, less than required. 00:34:41.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:41.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:41.175 Initialization complete. Launching workers. 
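For reference, a minimal sketch condensing the lvol flow the xtrace above records (rpc and spdk_nvme_perf abbreviate the full workspace paths shown in the log; the shell variables stand in for the IDs the script captures at run time):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Two malloc bdevs -> raid0 -> lvol store -> 20M lvol
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                  # -> Malloc0
    $rpc bdev_malloc_create 64 512                                  # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    # Export the lvol over NVMe/TCP
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Run randwrite I/O while snapshotting, resizing, cloning and inflating the lvol
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1
    snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait "$perf_pid"

The latency summary printed by that perf run follows.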
00:34:41.175 ======================================================== 00:34:41.175 Latency(us) 00:34:41.175 Device Information : IOPS MiB/s Average min max 00:34:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10494.20 40.99 12202.06 825.02 83804.87 00:34:41.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10352.40 40.44 12363.17 3135.88 83810.86 00:34:41.175 ======================================================== 00:34:41.175 Total : 20846.60 81.43 12282.07 825.02 83810.86 00:34:41.175 00:34:41.175 09:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.175 09:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 91b739d8-29f5-4b89-bf09-4c5fd870b5df 00:34:41.434 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad7eebfa-ac19-4ece-9592-780a843e64d0 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.999 rmmod nvme_tcp 00:34:41.999 rmmod nvme_fabrics 00:34:41.999 rmmod nvme_keyring 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1686663 ']' 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1686663 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1686663 ']' 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1686663 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686663 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686663' 00:34:41.999 killing process with pid 1686663 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1686663 00:34:41.999 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1686663 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.257 09:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.804 00:34:44.804 real 0m21.885s 00:34:44.804 user 1m2.396s 00:34:44.804 sys 0m9.011s 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:44.804 ************************************ 00:34:44.804 END TEST nvmf_lvol 00:34:44.804 ************************************ 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:44.804 ************************************ 00:34:44.804 START TEST nvmf_lvs_grow 00:34:44.804 
************************************ 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:44.804 * Looking for test storage... 00:34:44.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:44.804 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:44.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.805 --rc genhtml_branch_coverage=1 00:34:44.805 --rc genhtml_function_coverage=1 00:34:44.805 --rc genhtml_legend=1 00:34:44.805 --rc geninfo_all_blocks=1 00:34:44.805 --rc geninfo_unexecuted_blocks=1 00:34:44.805 00:34:44.805 ' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:44.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.805 --rc genhtml_branch_coverage=1 00:34:44.805 --rc genhtml_function_coverage=1 00:34:44.805 --rc genhtml_legend=1 00:34:44.805 --rc geninfo_all_blocks=1 00:34:44.805 --rc geninfo_unexecuted_blocks=1 00:34:44.805 00:34:44.805 ' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:44.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.805 --rc genhtml_branch_coverage=1 00:34:44.805 --rc genhtml_function_coverage=1 00:34:44.805 --rc genhtml_legend=1 00:34:44.805 --rc geninfo_all_blocks=1 00:34:44.805 --rc geninfo_unexecuted_blocks=1 00:34:44.805 00:34:44.805 ' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:44.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.805 --rc genhtml_branch_coverage=1 00:34:44.805 --rc genhtml_function_coverage=1 00:34:44.805 --rc genhtml_legend=1 00:34:44.805 --rc geninfo_all_blocks=1 00:34:44.805 --rc geninfo_unexecuted_blocks=1 00:34:44.805 00:34:44.805 ' 00:34:44.805 09:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.805 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
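A compressed sketch of what this argument assembly amounts to for the present run, with the values visible in this log filled in (simplified; the exact variable handling lives in the sourced nvmf/common.sh, and the netns prefix is only added further down once the target namespace exists):

    # Simplified view of how the target command line is assembled for this run
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0, enable all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty for this run
    NVMF_APP+=(--interrupt-mode)                  # interrupt mode is requested for this suite
    # After the target namespace is created below, the command is wrapped in it:
    NVMF_APP=(ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}")
    # which is why nvmfappstart later ends up running:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1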
00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.806 09:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.340 09:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:47.340 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:47.340 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:47.340 Found net devices under 0000:84:00.0: cvl_0_0 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:47.340 Found net devices under 0000:84:00.1: cvl_0_1 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.340 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.341 09:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.341 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:34:47.341 00:34:47.341 --- 10.0.0.2 ping statistics --- 00:34:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.341 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:34:47.341 00:34:47.341 --- 10.0.0.1 ping statistics --- 00:34:47.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.341 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1690604 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1690604 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1690604 ']' 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.341 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:47.341 [2024-10-07 09:55:42.108092] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:34:47.341 [2024-10-07 09:55:42.109620] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:47.341 [2024-10-07 09:55:42.109696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.600 [2024-10-07 09:55:42.187172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.600 [2024-10-07 09:55:42.293959] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.600 [2024-10-07 09:55:42.294014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.600 [2024-10-07 09:55:42.294028] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.600 [2024-10-07 09:55:42.294040] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.600 [2024-10-07 09:55:42.294049] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.600 [2024-10-07 09:55:42.294648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.600 [2024-10-07 09:55:42.383743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:47.600 [2024-10-07 09:55:42.384097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.859 09:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:48.466 [2024-10-07 09:55:42.979279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:48.466 ************************************ 00:34:48.466 START TEST lvs_grow_clean 00:34:48.466 ************************************ 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:48.466 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:48.750 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:48.750 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:49.317 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:34:49.317 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:34:49.317 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:49.886 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:49.886 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:49.886 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 lvol 150 00:34:50.455 09:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=19a377ba-8591-438b-a8da-b440d1f13339 00:34:50.455 09:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:50.455 09:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:51.023 [2024-10-07 09:55:45.531136] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:51.023 [2024-10-07 09:55:45.531263] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:51.023 true 00:34:51.023 09:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:34:51.023 09:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:51.282 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:51.283 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:51.850 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 19a377ba-8591-438b-a8da-b440d1f13339 00:34:52.418 09:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.984 [2024-10-07 09:55:47.543470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.985 09:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1691303 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1691303 /var/tmp/bdevperf.sock 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1691303 ']' 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:53.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.551 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.551 [2024-10-07 09:55:48.147417] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:34:53.551 [2024-10-07 09:55:48.147592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691303 ] 00:34:53.551 [2024-10-07 09:55:48.259386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.809 [2024-10-07 09:55:48.439501] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.809 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.809 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:34:53.809 09:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:54.375 Nvme0n1 00:34:54.375 09:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:54.941 [ 00:34:54.941 { 00:34:54.941 "name": "Nvme0n1", 00:34:54.941 "aliases": [ 00:34:54.941 "19a377ba-8591-438b-a8da-b440d1f13339" 00:34:54.941 ], 00:34:54.941 "product_name": "NVMe disk", 00:34:54.941 "block_size": 4096, 00:34:54.941 "num_blocks": 38912, 00:34:54.941 "uuid": "19a377ba-8591-438b-a8da-b440d1f13339", 00:34:54.941 "numa_id": 1, 00:34:54.941 "assigned_rate_limits": { 00:34:54.941 "rw_ios_per_sec": 0, 00:34:54.941 "rw_mbytes_per_sec": 0, 00:34:54.941 "r_mbytes_per_sec": 0, 00:34:54.941 "w_mbytes_per_sec": 0 00:34:54.941 }, 00:34:54.941 "claimed": false, 00:34:54.941 "zoned": false, 00:34:54.941 "supported_io_types": { 00:34:54.941 "read": true, 00:34:54.941 "write": true, 00:34:54.941 "unmap": true, 00:34:54.941 "flush": true, 00:34:54.941 "reset": true, 00:34:54.941 "nvme_admin": true, 00:34:54.941 "nvme_io": true, 00:34:54.941 "nvme_io_md": false, 00:34:54.941 "write_zeroes": true, 00:34:54.941 "zcopy": false, 00:34:54.941 "get_zone_info": false, 00:34:54.941 "zone_management": false, 00:34:54.941 "zone_append": false, 00:34:54.941 "compare": true, 00:34:54.941 "compare_and_write": true, 00:34:54.941 "abort": true, 00:34:54.941 "seek_hole": false, 00:34:54.941 "seek_data": false, 00:34:54.941 "copy": true, 
00:34:54.941 "nvme_iov_md": false 00:34:54.941 }, 00:34:54.941 "memory_domains": [ 00:34:54.941 { 00:34:54.941 "dma_device_id": "system", 00:34:54.941 "dma_device_type": 1 00:34:54.941 } 00:34:54.941 ], 00:34:54.941 "driver_specific": { 00:34:54.941 "nvme": [ 00:34:54.941 { 00:34:54.941 "trid": { 00:34:54.941 "trtype": "TCP", 00:34:54.941 "adrfam": "IPv4", 00:34:54.941 "traddr": "10.0.0.2", 00:34:54.941 "trsvcid": "4420", 00:34:54.941 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:54.941 }, 00:34:54.941 "ctrlr_data": { 00:34:54.941 "cntlid": 1, 00:34:54.941 "vendor_id": "0x8086", 00:34:54.941 "model_number": "SPDK bdev Controller", 00:34:54.941 "serial_number": "SPDK0", 00:34:54.941 "firmware_revision": "25.01", 00:34:54.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.941 "oacs": { 00:34:54.941 "security": 0, 00:34:54.941 "format": 0, 00:34:54.941 "firmware": 0, 00:34:54.941 "ns_manage": 0 00:34:54.941 }, 00:34:54.941 "multi_ctrlr": true, 00:34:54.941 "ana_reporting": false 00:34:54.941 }, 00:34:54.941 "vs": { 00:34:54.941 "nvme_version": "1.3" 00:34:54.941 }, 00:34:54.941 "ns_data": { 00:34:54.941 "id": 1, 00:34:54.941 "can_share": true 00:34:54.941 } 00:34:54.941 } 00:34:54.941 ], 00:34:54.941 "mp_policy": "active_passive" 00:34:54.941 } 00:34:54.941 } 00:34:54.941 ] 00:34:54.941 09:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1691441 00:34:54.941 09:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:54.941 09:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:55.200 Running I/O for 10 seconds... 
00:34:56.134 Latency(us) 00:34:56.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:56.134 Nvme0n1 : 1.00 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:34:56.134 =================================================================================================================== 00:34:56.134 Total : 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:34:56.134 00:34:57.069 09:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:34:57.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.069 Nvme0n1 : 2.00 14697.00 57.41 0.00 0.00 0.00 0.00 0.00 00:34:57.069 =================================================================================================================== 00:34:57.069 Total : 14697.00 57.41 0.00 0.00 0.00 0.00 0.00 00:34:57.069 00:34:57.328 true 00:34:57.328 09:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:34:57.328 09:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:57.894 09:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:57.894 09:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:57.894 09:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1691441 00:34:58.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.153 Nvme0n1 : 3.00 14529.67 56.76 0.00 0.00 0.00 0.00 0.00 00:34:58.153 =================================================================================================================== 00:34:58.153 Total : 14529.67 56.76 0.00 0.00 0.00 0.00 0.00 00:34:58.153 00:34:59.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:59.089 Nvme0n1 : 4.00 14482.75 56.57 0.00 0.00 0.00 0.00 0.00 00:34:59.089 =================================================================================================================== 00:34:59.089 Total : 14482.75 56.57 0.00 0.00 0.00 0.00 0.00 00:34:59.089 00:35:00.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:00.465 Nvme0n1 : 5.00 14474.40 56.54 0.00 0.00 0.00 0.00 0.00 00:35:00.465 =================================================================================================================== 00:35:00.465 Total : 14474.40 56.54 0.00 0.00 0.00 0.00 0.00 00:35:00.465 00:35:01.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.400 Nvme0n1 : 6.00 14419.17 56.32 0.00 0.00 0.00 0.00 0.00 00:35:01.400 =================================================================================================================== 00:35:01.400 Total : 14419.17 56.32 0.00 0.00 0.00 0.00 0.00 00:35:01.400 00:35:02.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:02.349 Nvme0n1 : 7.00 14425.43 56.35 0.00 0.00 0.00 0.00 
0.00 00:35:02.349 =================================================================================================================== 00:35:02.349 Total : 14425.43 56.35 0.00 0.00 0.00 0.00 0.00 00:35:02.349 00:35:03.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.283 Nvme0n1 : 8.00 14438.25 56.40 0.00 0.00 0.00 0.00 0.00 00:35:03.283 =================================================================================================================== 00:35:03.283 Total : 14438.25 56.40 0.00 0.00 0.00 0.00 0.00 00:35:03.283 00:35:04.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.216 Nvme0n1 : 9.00 14448.22 56.44 0.00 0.00 0.00 0.00 0.00 00:35:04.216 =================================================================================================================== 00:35:04.216 Total : 14448.22 56.44 0.00 0.00 0.00 0.00 0.00 00:35:04.216 00:35:05.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.151 Nvme0n1 : 10.00 14460.10 56.48 0.00 0.00 0.00 0.00 0.00 00:35:05.151 =================================================================================================================== 00:35:05.151 Total : 14460.10 56.48 0.00 0.00 0.00 0.00 0.00 00:35:05.151 00:35:05.151 00:35:05.151 Latency(us) 00:35:05.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.151 Nvme0n1 : 10.01 14460.45 56.49 0.00 0.00 8847.24 4878.79 20194.80 00:35:05.151 =================================================================================================================== 00:35:05.151 Total : 14460.45 56.49 0.00 0.00 8847.24 4878.79 20194.80 00:35:05.151 { 00:35:05.151 "results": [ 00:35:05.151 { 00:35:05.151 "job": "Nvme0n1", 00:35:05.151 "core_mask": "0x2", 00:35:05.151 "workload": "randwrite", 00:35:05.151 "status": "finished", 00:35:05.152 "queue_depth": 128, 00:35:05.152 "io_size": 4096, 00:35:05.152 "runtime": 10.008611, 00:35:05.152 "iops": 14460.448108134085, 00:35:05.152 "mibps": 56.48612542239877, 00:35:05.152 "io_failed": 0, 00:35:05.152 "io_timeout": 0, 00:35:05.152 "avg_latency_us": 8847.239080631669, 00:35:05.152 "min_latency_us": 4878.791111111111, 00:35:05.152 "max_latency_us": 20194.79703703704 00:35:05.152 } 00:35:05.152 ], 00:35:05.152 "core_count": 1 00:35:05.152 } 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1691303 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1691303 ']' 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1691303 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1691303 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:05.152 09:55:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1691303' 00:35:05.152 killing process with pid 1691303 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1691303 00:35:05.152 Received shutdown signal, test time was about 10.000000 seconds 00:35:05.152 00:35:05.152 Latency(us) 00:35:05.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.152 =================================================================================================================== 00:35:05.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.152 09:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1691303 00:35:05.720 09:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:06.287 09:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.855 09:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:06.855 09:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:07.112 09:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:07.112 09:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:07.112 09:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:07.370 [2024-10-07 09:56:02.179187] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:07.629 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:07.888 request: 00:35:07.888 { 00:35:07.888 "uuid": "47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389", 00:35:07.888 "method": "bdev_lvol_get_lvstores", 00:35:07.888 "req_id": 1 00:35:07.888 } 00:35:07.888 Got JSON-RPC error response 00:35:07.888 response: 00:35:07.888 { 00:35:07.888 "code": -19, 00:35:07.888 "message": "No such device" 00:35:07.888 } 00:35:08.146 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:35:08.146 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:08.146 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:08.146 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:08.146 09:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:08.405 aio_bdev 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 19a377ba-8591-438b-a8da-b440d1f13339 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=19a377ba-8591-438b-a8da-b440d1f13339 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:08.663 09:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:08.663 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:09.229 09:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 19a377ba-8591-438b-a8da-b440d1f13339 -t 2000 00:35:09.487 [ 00:35:09.487 { 00:35:09.487 "name": "19a377ba-8591-438b-a8da-b440d1f13339", 00:35:09.487 "aliases": [ 00:35:09.487 "lvs/lvol" 00:35:09.487 ], 00:35:09.487 "product_name": "Logical Volume", 00:35:09.487 "block_size": 4096, 00:35:09.487 "num_blocks": 38912, 00:35:09.487 "uuid": "19a377ba-8591-438b-a8da-b440d1f13339", 00:35:09.487 "assigned_rate_limits": { 00:35:09.487 "rw_ios_per_sec": 0, 00:35:09.487 "rw_mbytes_per_sec": 0, 00:35:09.487 "r_mbytes_per_sec": 0, 00:35:09.487 "w_mbytes_per_sec": 0 00:35:09.487 }, 00:35:09.487 "claimed": false, 00:35:09.487 "zoned": false, 00:35:09.487 "supported_io_types": { 00:35:09.487 "read": true, 00:35:09.487 "write": true, 00:35:09.487 "unmap": true, 00:35:09.487 "flush": false, 00:35:09.487 "reset": true, 00:35:09.487 "nvme_admin": false, 00:35:09.487 "nvme_io": false, 00:35:09.487 "nvme_io_md": false, 00:35:09.487 "write_zeroes": true, 00:35:09.487 "zcopy": false, 00:35:09.487 "get_zone_info": false, 00:35:09.487 "zone_management": false, 00:35:09.487 "zone_append": false, 00:35:09.487 "compare": false, 00:35:09.487 "compare_and_write": false, 00:35:09.487 "abort": false, 00:35:09.487 "seek_hole": true, 00:35:09.487 "seek_data": true, 00:35:09.487 "copy": false, 00:35:09.487 "nvme_iov_md": false 00:35:09.487 }, 00:35:09.487 "driver_specific": { 00:35:09.487 "lvol": { 00:35:09.487 "lvol_store_uuid": "47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389", 00:35:09.487 "base_bdev": "aio_bdev", 00:35:09.487 "thin_provision": false, 00:35:09.487 "num_allocated_clusters": 38, 00:35:09.487 "snapshot": false, 00:35:09.487 "clone": false, 00:35:09.487 "esnap_clone": false 00:35:09.487 } 00:35:09.487 } 00:35:09.487 } 00:35:09.487 ] 00:35:09.487 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:35:09.487 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:09.487 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:10.054 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:10.054 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:10.054 09:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:10.313 09:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:35:10.313 09:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19a377ba-8591-438b-a8da-b440d1f13339 00:35:10.880 09:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47a0e5fc-f548-4ac0-9d3c-ef9cc0d7b389 00:35:11.447 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:12.014 00:35:12.014 real 0m23.679s 00:35:12.014 user 0m23.445s 00:35:12.014 sys 0m2.579s 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:12.014 ************************************ 00:35:12.014 END TEST lvs_grow_clean 00:35:12.014 ************************************ 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.014 ************************************ 00:35:12.014 START TEST lvs_grow_dirty 00:35:12.014 ************************************ 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:12.014 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:12.272 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:12.272 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:12.838 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:12.839 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:12.839 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:13.406 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:13.406 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:13.406 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 lvol 150 00:35:13.972 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:13.972 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:13.972 09:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:14.538 [2024-10-07 09:56:09.111128] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:14.538 [2024-10-07 09:56:09.111257] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:14.538 true 00:35:14.538 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:14.538 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:14.797 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:14.797 09:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:15.362 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:15.930 09:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.497 [2024-10-07 09:56:11.115484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.497 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1693986 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1693986 /var/tmp/bdevperf.sock 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1693986 ']' 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:17.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:17.066 09:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:17.066 [2024-10-07 09:56:11.714011] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
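Before this bdevperf instance starts, the dirty variant has already built the lvstore it will later grow: a 200 MiB aio file with 4 MiB clusters (49 data clusters), a 150 MiB lvol on top, then a truncate to 400 MiB followed by bdev_aio_rescan; the lvstore keeps reporting 49 clusters until bdev_lvol_grow_lvstore runs mid-benchmark. The cluster accounting behind the `(( data_clusters == ... ))` and free_clusters checks works out as in the sketch below. The constants are taken from this log; the reading that the one missing cluster per size is blobstore/lvstore metadata overhead is my assumption, not something the log states.

```python
# Cluster arithmetic behind the lvs_grow_dirty checks (constants from this log).
MiB = 1024 * 1024
cluster_sz = 4 * MiB                  # bdev_lvol_create_lvstore --cluster-sz 4194304

aio_before = 200 * MiB                # truncate -s 200M .../aio_bdev
aio_after  = 400 * MiB                # truncate -s 400M .../aio_bdev
lvol_size  = 150 * MiB                # bdev_lvol_create ... lvol 150

# Assumption: in this run one cluster's worth of space goes to metadata,
# hence 49 and 99 rather than 50 and 100 total data clusters.
clusters_before = aio_before // cluster_sz - 1        # 49
clusters_after  = aio_after  // cluster_sz - 1        # 99 after grow_lvstore

lvol_clusters = -(-lvol_size // cluster_sz)           # ceil(150 / 4) = 38
free_after_grow = clusters_after - lvol_clusters      # 99 - 38 = 61

assert (clusters_before, clusters_after) == (49, 99)
assert (lvol_clusters, free_after_grow) == (38, 61)
```

These are exactly the values the test asserts: data_clusters 49 before the grow, 99 after, num_allocated_clusters 38 for the lvol, and 61 free clusters.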
00:35:17.066 [2024-10-07 09:56:11.714101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693986 ] 00:35:17.066 [2024-10-07 09:56:11.825087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.325 [2024-10-07 09:56:12.003005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.584 09:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.584 09:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:35:17.584 09:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:18.150 Nvme0n1 00:35:18.150 09:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:18.409 [ 00:35:18.409 { 00:35:18.409 "name": "Nvme0n1", 00:35:18.409 "aliases": [ 00:35:18.409 "26e050bb-cff0-4f57-bb59-3f5e39d6d122" 00:35:18.409 ], 00:35:18.409 "product_name": "NVMe disk", 00:35:18.409 "block_size": 4096, 00:35:18.409 "num_blocks": 38912, 00:35:18.409 "uuid": "26e050bb-cff0-4f57-bb59-3f5e39d6d122", 00:35:18.409 "numa_id": 1, 00:35:18.409 "assigned_rate_limits": { 00:35:18.409 "rw_ios_per_sec": 0, 00:35:18.409 "rw_mbytes_per_sec": 0, 00:35:18.409 "r_mbytes_per_sec": 0, 00:35:18.409 "w_mbytes_per_sec": 0 00:35:18.409 }, 00:35:18.409 "claimed": false, 00:35:18.409 "zoned": false, 00:35:18.409 "supported_io_types": { 00:35:18.409 "read": true, 00:35:18.409 "write": true, 00:35:18.409 "unmap": true, 00:35:18.409 "flush": true, 00:35:18.409 "reset": true, 00:35:18.409 "nvme_admin": true, 00:35:18.409 "nvme_io": true, 00:35:18.409 "nvme_io_md": false, 00:35:18.409 "write_zeroes": true, 00:35:18.409 "zcopy": false, 00:35:18.409 "get_zone_info": false, 00:35:18.409 "zone_management": false, 00:35:18.409 "zone_append": false, 00:35:18.409 "compare": true, 00:35:18.409 "compare_and_write": true, 00:35:18.409 "abort": true, 00:35:18.409 "seek_hole": false, 00:35:18.409 "seek_data": false, 00:35:18.409 "copy": true, 00:35:18.409 "nvme_iov_md": false 00:35:18.409 }, 00:35:18.409 "memory_domains": [ 00:35:18.409 { 00:35:18.409 "dma_device_id": "system", 00:35:18.409 "dma_device_type": 1 00:35:18.409 } 00:35:18.409 ], 00:35:18.409 "driver_specific": { 00:35:18.409 "nvme": [ 00:35:18.409 { 00:35:18.409 "trid": { 00:35:18.409 "trtype": "TCP", 00:35:18.409 "adrfam": "IPv4", 00:35:18.409 "traddr": "10.0.0.2", 00:35:18.409 "trsvcid": "4420", 00:35:18.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:18.409 }, 00:35:18.409 "ctrlr_data": { 00:35:18.409 "cntlid": 1, 00:35:18.409 "vendor_id": "0x8086", 00:35:18.409 "model_number": "SPDK bdev Controller", 00:35:18.409 "serial_number": "SPDK0", 00:35:18.409 "firmware_revision": "25.01", 00:35:18.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.409 "oacs": { 00:35:18.409 "security": 0, 00:35:18.409 "format": 0, 00:35:18.409 "firmware": 0, 00:35:18.409 "ns_manage": 0 00:35:18.409 }, 
00:35:18.409 "multi_ctrlr": true, 00:35:18.409 "ana_reporting": false 00:35:18.409 }, 00:35:18.409 "vs": { 00:35:18.409 "nvme_version": "1.3" 00:35:18.409 }, 00:35:18.409 "ns_data": { 00:35:18.409 "id": 1, 00:35:18.409 "can_share": true 00:35:18.409 } 00:35:18.409 } 00:35:18.409 ], 00:35:18.409 "mp_policy": "active_passive" 00:35:18.409 } 00:35:18.409 } 00:35:18.409 ] 00:35:18.409 09:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1694118 00:35:18.409 09:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:18.409 09:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:18.739 Running I/O for 10 seconds... 00:35:19.757 Latency(us) 00:35:19.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:19.757 Nvme0n1 : 1.00 13820.00 53.98 0.00 0.00 0.00 0.00 0.00 00:35:19.757 =================================================================================================================== 00:35:19.757 Total : 13820.00 53.98 0.00 0.00 0.00 0.00 0.00 00:35:19.757 00:35:20.325 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:20.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:20.583 Nvme0n1 : 2.00 13917.50 54.37 0.00 0.00 0.00 0.00 0.00 00:35:20.583 =================================================================================================================== 00:35:20.583 Total : 13917.50 54.37 0.00 0.00 0.00 0.00 0.00 00:35:20.583 00:35:20.842 true 00:35:20.842 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:20.842 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:21.100 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:21.100 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:21.100 09:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1694118 00:35:21.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.667 Nvme0n1 : 3.00 14032.67 54.82 0.00 0.00 0.00 0.00 0.00 00:35:21.667 =================================================================================================================== 00:35:21.667 Total : 14032.67 54.82 0.00 0.00 0.00 0.00 0.00 00:35:21.667 00:35:22.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:22.602 Nvme0n1 : 4.00 14153.00 55.29 0.00 0.00 0.00 0.00 0.00 00:35:22.602 =================================================================================================================== 
00:35:22.602 Total : 14153.00 55.29 0.00 0.00 0.00 0.00 0.00 00:35:22.602 00:35:23.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.537 Nvme0n1 : 5.00 14225.80 55.57 0.00 0.00 0.00 0.00 0.00 00:35:23.537 =================================================================================================================== 00:35:23.537 Total : 14225.80 55.57 0.00 0.00 0.00 0.00 0.00 00:35:23.537 00:35:24.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:24.909 Nvme0n1 : 6.00 14273.33 55.76 0.00 0.00 0.00 0.00 0.00 00:35:24.909 =================================================================================================================== 00:35:24.909 Total : 14273.33 55.76 0.00 0.00 0.00 0.00 0.00 00:35:24.909 00:35:25.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.844 Nvme0n1 : 7.00 14307.29 55.89 0.00 0.00 0.00 0.00 0.00 00:35:25.844 =================================================================================================================== 00:35:25.844 Total : 14307.29 55.89 0.00 0.00 0.00 0.00 0.00 00:35:25.844 00:35:26.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:26.779 Nvme0n1 : 8.00 14315.75 55.92 0.00 0.00 0.00 0.00 0.00 00:35:26.779 =================================================================================================================== 00:35:26.779 Total : 14315.75 55.92 0.00 0.00 0.00 0.00 0.00 00:35:26.779 00:35:27.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:27.715 Nvme0n1 : 9.00 14424.56 56.35 0.00 0.00 0.00 0.00 0.00 00:35:27.715 =================================================================================================================== 00:35:27.715 Total : 14424.56 56.35 0.00 0.00 0.00 0.00 0.00 00:35:27.715 00:35:28.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.648 Nvme0n1 : 10.00 14548.50 56.83 0.00 0.00 0.00 0.00 0.00 00:35:28.648 =================================================================================================================== 00:35:28.648 Total : 14548.50 56.83 0.00 0.00 0.00 0.00 0.00 00:35:28.648 00:35:28.649 00:35:28.649 Latency(us) 00:35:28.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.649 Nvme0n1 : 10.01 14545.48 56.82 0.00 0.00 8794.45 4636.07 20194.80 00:35:28.649 =================================================================================================================== 00:35:28.649 Total : 14545.48 56.82 0.00 0.00 8794.45 4636.07 20194.80 00:35:28.649 { 00:35:28.649 "results": [ 00:35:28.649 { 00:35:28.649 "job": "Nvme0n1", 00:35:28.649 "core_mask": "0x2", 00:35:28.649 "workload": "randwrite", 00:35:28.649 "status": "finished", 00:35:28.649 "queue_depth": 128, 00:35:28.649 "io_size": 4096, 00:35:28.649 "runtime": 10.006475, 00:35:28.649 "iops": 14545.481800534155, 00:35:28.649 "mibps": 56.81828828333654, 00:35:28.649 "io_failed": 0, 00:35:28.649 "io_timeout": 0, 00:35:28.649 "avg_latency_us": 8794.454223185116, 00:35:28.649 "min_latency_us": 4636.065185185185, 00:35:28.649 "max_latency_us": 20194.79703703704 00:35:28.649 } 00:35:28.649 ], 00:35:28.649 "core_count": 1 00:35:28.649 } 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1693986 00:35:28.649 
09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1693986 ']' 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1693986 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693986 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693986' 00:35:28.649 killing process with pid 1693986 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1693986 00:35:28.649 Received shutdown signal, test time was about 10.000000 seconds 00:35:28.649 00:35:28.649 Latency(us) 00:35:28.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.649 =================================================================================================================== 00:35:28.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.649 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1693986 00:35:29.216 09:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:29.475 09:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.733 09:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:29.733 09:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1690604 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1690604 00:35:30.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1690604 Killed 
"${NVMF_APP[@]}" "$@" 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:30.299 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1695444 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1695444 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1695444 ']' 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.557 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:30.557 [2024-10-07 09:56:25.185142] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:30.557 [2024-10-07 09:56:25.187207] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:30.557 [2024-10-07 09:56:25.187301] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.557 [2024-10-07 09:56:25.293922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.816 [2024-10-07 09:56:25.414322] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.816 [2024-10-07 09:56:25.414392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.816 [2024-10-07 09:56:25.414409] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.816 [2024-10-07 09:56:25.414421] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:30.816 [2024-10-07 09:56:25.414433] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:30.816 [2024-10-07 09:56:25.415153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.816 [2024-10-07 09:56:25.520400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:30.816 [2024-10-07 09:56:25.520730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.075 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:31.642 [2024-10-07 09:56:26.210436] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:31.642 [2024-10-07 09:56:26.210598] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:31.642 [2024-10-07 09:56:26.210657] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:31.642 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:31.901 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26e050bb-cff0-4f57-bb59-3f5e39d6d122 -t 2000 00:35:32.470 [ 00:35:32.470 { 00:35:32.470 "name": "26e050bb-cff0-4f57-bb59-3f5e39d6d122", 00:35:32.470 "aliases": [ 00:35:32.470 "lvs/lvol" 00:35:32.470 ], 00:35:32.470 "product_name": "Logical Volume", 00:35:32.470 "block_size": 4096, 00:35:32.470 "num_blocks": 38912, 00:35:32.470 "uuid": "26e050bb-cff0-4f57-bb59-3f5e39d6d122", 00:35:32.470 "assigned_rate_limits": { 00:35:32.470 "rw_ios_per_sec": 0, 00:35:32.470 "rw_mbytes_per_sec": 0, 00:35:32.470 "r_mbytes_per_sec": 0, 00:35:32.470 "w_mbytes_per_sec": 0 00:35:32.470 }, 00:35:32.470 "claimed": false, 00:35:32.470 "zoned": false, 00:35:32.470 "supported_io_types": { 00:35:32.470 "read": true, 00:35:32.470 "write": true, 00:35:32.470 "unmap": true, 00:35:32.470 "flush": false, 00:35:32.470 "reset": true, 00:35:32.470 "nvme_admin": false, 00:35:32.470 "nvme_io": false, 00:35:32.470 "nvme_io_md": false, 00:35:32.470 "write_zeroes": true, 00:35:32.470 "zcopy": false, 00:35:32.470 "get_zone_info": false, 00:35:32.470 "zone_management": false, 00:35:32.470 "zone_append": false, 00:35:32.470 "compare": false, 00:35:32.470 "compare_and_write": false, 00:35:32.470 "abort": false, 00:35:32.470 "seek_hole": true, 00:35:32.470 "seek_data": true, 00:35:32.470 "copy": false, 00:35:32.470 "nvme_iov_md": false 00:35:32.470 }, 00:35:32.470 "driver_specific": { 00:35:32.470 "lvol": { 00:35:32.470 "lvol_store_uuid": "3f5fb4fc-229c-4b2d-af46-d6c13771e8f1", 00:35:32.470 "base_bdev": "aio_bdev", 00:35:32.470 "thin_provision": false, 00:35:32.470 "num_allocated_clusters": 38, 00:35:32.470 "snapshot": false, 00:35:32.470 "clone": false, 00:35:32.470 "esnap_clone": false 00:35:32.470 } 00:35:32.470 } 00:35:32.470 } 00:35:32.470 ] 00:35:32.470 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:35:32.470 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:32.470 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:33.039 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:33.039 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:33.039 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:33.607 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:33.607 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:34.175 [2024-10-07 09:56:28.795768] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:34.175 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:34.745 request: 00:35:34.745 { 00:35:34.745 "uuid": "3f5fb4fc-229c-4b2d-af46-d6c13771e8f1", 00:35:34.745 "method": "bdev_lvol_get_lvstores", 00:35:34.745 "req_id": 1 00:35:34.745 } 00:35:34.745 Got JSON-RPC error response 00:35:34.745 response: 00:35:34.745 { 00:35:34.745 "code": -19, 00:35:34.745 "message": "No such device" 00:35:34.745 } 00:35:34.745 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:35:34.745 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:34.745 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:34.745 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:34.745 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:35.311 
aio_bdev 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:35.312 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:35.877 09:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26e050bb-cff0-4f57-bb59-3f5e39d6d122 -t 2000 00:35:36.135 [ 00:35:36.135 { 00:35:36.135 "name": "26e050bb-cff0-4f57-bb59-3f5e39d6d122", 00:35:36.135 "aliases": [ 00:35:36.135 "lvs/lvol" 00:35:36.135 ], 00:35:36.135 "product_name": "Logical Volume", 00:35:36.135 "block_size": 4096, 00:35:36.135 "num_blocks": 38912, 00:35:36.135 "uuid": "26e050bb-cff0-4f57-bb59-3f5e39d6d122", 00:35:36.135 "assigned_rate_limits": { 00:35:36.135 "rw_ios_per_sec": 0, 00:35:36.135 "rw_mbytes_per_sec": 0, 00:35:36.135 "r_mbytes_per_sec": 0, 00:35:36.135 "w_mbytes_per_sec": 0 00:35:36.135 }, 00:35:36.135 "claimed": false, 00:35:36.135 "zoned": false, 00:35:36.135 "supported_io_types": { 00:35:36.135 "read": true, 00:35:36.135 "write": true, 00:35:36.135 "unmap": true, 00:35:36.135 "flush": false, 00:35:36.135 "reset": true, 00:35:36.135 "nvme_admin": false, 00:35:36.135 "nvme_io": false, 00:35:36.135 "nvme_io_md": false, 00:35:36.135 "write_zeroes": true, 00:35:36.135 "zcopy": false, 00:35:36.135 "get_zone_info": false, 00:35:36.135 "zone_management": false, 00:35:36.135 "zone_append": false, 00:35:36.135 "compare": false, 00:35:36.135 "compare_and_write": false, 00:35:36.135 "abort": false, 00:35:36.135 "seek_hole": true, 00:35:36.135 "seek_data": true, 00:35:36.135 "copy": false, 00:35:36.135 "nvme_iov_md": false 00:35:36.135 }, 00:35:36.135 "driver_specific": { 00:35:36.135 "lvol": { 00:35:36.135 "lvol_store_uuid": "3f5fb4fc-229c-4b2d-af46-d6c13771e8f1", 00:35:36.135 "base_bdev": "aio_bdev", 00:35:36.135 "thin_provision": false, 00:35:36.135 "num_allocated_clusters": 38, 00:35:36.135 "snapshot": false, 00:35:36.135 "clone": false, 00:35:36.135 "esnap_clone": false 00:35:36.135 } 00:35:36.135 } 00:35:36.135 } 00:35:36.135 ] 00:35:36.135 09:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:35:36.135 09:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:36.135 09:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:36.701 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:36.701 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:36.701 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:37.266 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:37.266 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26e050bb-cff0-4f57-bb59-3f5e39d6d122 00:35:37.832 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f5fb4fc-229c-4b2d-af46-d6c13771e8f1 00:35:38.089 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:38.655 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:38.914 00:35:38.914 real 0m26.698s 00:35:38.914 user 0m43.149s 00:35:38.914 sys 0m5.960s 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:38.914 ************************************ 00:35:38.914 END TEST lvs_grow_dirty 00:35:38.914 ************************************ 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:38.914 nvmf_trace.0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.914 rmmod nvme_tcp 00:35:38.914 rmmod nvme_fabrics 00:35:38.914 rmmod nvme_keyring 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1695444 ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1695444 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1695444 ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1695444 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1695444 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1695444' 00:35:38.914 killing process with pid 1695444 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1695444 00:35:38.914 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1695444 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:39.482 09:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:35:39.482 09:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.482 09:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.482 09:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.482 09:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.482 09:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:41.383 00:35:41.383 real 0m56.982s 00:35:41.383 user 1m9.018s 00:35:41.383 sys 0m11.178s 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:41.383 ************************************ 00:35:41.383 END TEST nvmf_lvs_grow 00:35:41.383 ************************************ 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:41.383 ************************************ 00:35:41.383 START TEST nvmf_bdev_io_wait 00:35:41.383 ************************************ 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:41.383 * Looking for test storage... 
00:35:41.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:41.383 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:41.384 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:35:41.384 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.642 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.643 --rc genhtml_branch_coverage=1 00:35:41.643 --rc genhtml_function_coverage=1 00:35:41.643 --rc genhtml_legend=1 00:35:41.643 --rc geninfo_all_blocks=1 00:35:41.643 --rc geninfo_unexecuted_blocks=1 00:35:41.643 00:35:41.643 ' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.643 --rc genhtml_branch_coverage=1 00:35:41.643 --rc genhtml_function_coverage=1 00:35:41.643 --rc genhtml_legend=1 00:35:41.643 --rc geninfo_all_blocks=1 00:35:41.643 --rc geninfo_unexecuted_blocks=1 00:35:41.643 00:35:41.643 ' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.643 --rc genhtml_branch_coverage=1 00:35:41.643 --rc genhtml_function_coverage=1 00:35:41.643 --rc genhtml_legend=1 00:35:41.643 --rc geninfo_all_blocks=1 00:35:41.643 --rc geninfo_unexecuted_blocks=1 00:35:41.643 00:35:41.643 ' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.643 --rc genhtml_branch_coverage=1 00:35:41.643 --rc genhtml_function_coverage=1 00:35:41.643 --rc genhtml_legend=1 00:35:41.643 --rc geninfo_all_blocks=1 00:35:41.643 --rc 
geninfo_unexecuted_blocks=1 00:35:41.643 00:35:41.643 ' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.643 09:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
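The trace above is gather_supported_nvmf_pci_devs from test/nvmf/common.sh building allow-lists of NVMe-oF-capable NICs (Intel E810/X722 plus several Mellanox device IDs) and, because this run uses SPDK_TEST_NVMF_NICS=e810, keeping only the E810 entries before walking the host's PCI bus in the lines that follow. As a rough illustration only (the real logic is the bash arrays shown in the log; the function and variable names below are made up, and only the vendor/device IDs are taken from the trace):

    # Sketch of the NIC allow-list used by gather_supported_nvmf_pci_devs.
    # The (vendor, device) IDs are the ones echoed in the trace above;
    # everything else here is illustrative, not SPDK code.
    SUPPORTED_NICS = {
        "e810": {(0x8086, 0x1592), (0x8086, 0x159b)},
        "x722": {(0x8086, 0x37d2)},
        "mlx":  {(0x15b3, 0xa2dc), (0x15b3, 0x1021), (0x15b3, 0xa2d6),
                 (0x15b3, 0x101d), (0x15b3, 0x101b), (0x15b3, 0x1017),
                 (0x15b3, 0x1019), (0x15b3, 0x1015), (0x15b3, 0x1013)},
    }

    def matching_devices(present, family="e810"):
        """Return only the PCI (vendor, device) pairs of the requested NIC family."""
        return [dev for dev in present if dev in SUPPORTED_NICS[family]]

    # This run matched two 0x8086:0x159b functions (0000:84:00.0 and 0000:84:00.1).
    print(matching_devices([(0x8086, 0x159b), (0x8086, 0x159b)]))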
00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:44.177 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:44.177 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:44.177 Found net devices under 0000:84:00.0: cvl_0_0 00:35:44.177 
09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:44.177 Found net devices under 0000:84:00.1: cvl_0_1 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:35:44.177 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:44.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:44.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:35:44.178 00:35:44.178 --- 10.0.0.2 ping statistics --- 00:35:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.178 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:44.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:44.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:35:44.178 00:35:44.178 --- 10.0.0.1 ping statistics --- 00:35:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.178 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1698490 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1698490 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1698490 ']' 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
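At this point nvmfappstart has launched the target as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc, and waitforlisten is blocking until the application opens its JSON-RPC socket at /var/tmp/spdk.sock. A minimal sketch of that kind of wait loop, assuming plain AF_UNIX connect attempts (the real helper in autotest_common.sh also tracks the PID and goes through rpc.py):

    import socket
    import time

    def wait_for_rpc_socket(path="/var/tmp/spdk.sock", timeout=30.0):
        """Poll until an SPDK app is accepting connections on its RPC socket."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)      # succeeds once nvmf_tgt has bound the socket
                return
            except OSError:
                time.sleep(0.2)      # not listening yet, retry
            finally:
                s.close()
        raise TimeoutError(f"no RPC listener on {path} within {timeout}s")

    wait_for_rpc_socket()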
00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:44.178 09:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:44.438 [2024-10-07 09:56:39.024988] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:44.438 [2024-10-07 09:56:39.026543] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:44.438 [2024-10-07 09:56:39.026624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.438 [2024-10-07 09:56:39.109744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:44.438 [2024-10-07 09:56:39.235156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.438 [2024-10-07 09:56:39.235229] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.438 [2024-10-07 09:56:39.235246] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.438 [2024-10-07 09:56:39.235260] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.438 [2024-10-07 09:56:39.235271] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.438 [2024-10-07 09:56:39.237189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.438 [2024-10-07 09:56:39.237248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.438 [2024-10-07 09:56:39.237366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.438 [2024-10-07 09:56:39.237369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.438 [2024-10-07 09:56:39.237865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
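The startup banner above ties the pieces together: the target was started with core mask -m 0xF, DPDK echoes the EAL parameters (file-prefix spdk0, -c 0xF), the app reports "Total cores available: 4", one reactor starts on each of cores 0-3, and app_thread is switched to interrupt mode. The core mask is simply a hex bitmap of CPU cores; a small helper to expand it (illustrative, not part of the test scripts):

    def cores_from_mask(mask):
        """Expand an SPDK/DPDK hex core mask into the list of CPU cores it selects."""
        value = int(mask, 16)
        return [bit for bit in range(value.bit_length()) if value & (1 << bit)]

    print(cores_from_mask("0xF"))   # [0, 1, 2, 3] -> the four reactors logged above
    print(cores_from_mask("0x80"))  # [7]          -> a single-core bdevperf instance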
00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 [2024-10-07 09:56:39.675047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.006 [2024-10-07 09:56:39.675240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.006 [2024-10-07 09:56:39.676135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.006 [2024-10-07 09:56:39.677043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
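With the RPC socket up, the script configures the target before framework initialization: rpc_cmd bdev_set_options -p 5 -c 1 appears to shrink the bdev_io pool and cache to tiny values (this suite exercises the bdev io-wait path, so I/O submissions are meant to run out of bdev_io objects and queue), and rpc_cmd framework_start_init then finishes startup, at which point the four nvmf_tgt poll-group threads report switching to interrupt mode. In the test these calls go through scripts/rpc.py; the stand-alone sketch below sends the same two JSON-RPC requests over the Unix socket. The framing helper is my own, and the parameter names reflect my reading of the bdev_set_options RPC rather than anything printed in this log:

    import json
    import socket

    def rpc(sock, method, params=None, req_id=1):
        """Send one JSON-RPC 2.0 request to an SPDK app and return the parsed reply."""
        req = {"jsonrpc": "2.0", "id": req_id, "method": method}
        if params is not None:
            req["params"] = params
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # reply still incomplete, keep reading

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/tmp/spdk.sock")
    # Tiny bdev_io pool/cache (the -p 5 -c 1 above) so later I/O runs out of
    # bdev_io objects and has to queue on the io_wait machinery under test.
    print(rpc(s, "bdev_set_options", {"bdev_io_pool_size": 5, "bdev_io_cache_size": 1}, req_id=1))
    print(rpc(s, "framework_start_init", req_id=2))
    s.close()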
00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 [2024-10-07 09:56:39.682115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 Malloc0 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:45.007 [2024-10-07 09:56:39.762299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1698557 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1698559 00:35:45.007 09:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1698562 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:45.007 { 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme$subsystem", 00:35:45.007 "trtype": "$TEST_TRANSPORT", 00:35:45.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "$NVMF_PORT", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.007 "hdgst": ${hdgst:-false}, 00:35:45.007 "ddgst": ${ddgst:-false} 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 } 00:35:45.007 EOF 00:35:45.007 )") 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:45.007 { 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme$subsystem", 00:35:45.007 "trtype": "$TEST_TRANSPORT", 00:35:45.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "$NVMF_PORT", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.007 "hdgst": ${hdgst:-false}, 00:35:45.007 "ddgst": ${ddgst:-false} 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 } 00:35:45.007 EOF 00:35:45.007 )") 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1698565 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:45.007 { 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme$subsystem", 00:35:45.007 "trtype": "$TEST_TRANSPORT", 00:35:45.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "$NVMF_PORT", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.007 "hdgst": ${hdgst:-false}, 00:35:45.007 "ddgst": ${ddgst:-false} 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 } 00:35:45.007 EOF 00:35:45.007 )") 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:45.007 { 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme$subsystem", 00:35:45.007 "trtype": "$TEST_TRANSPORT", 00:35:45.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "$NVMF_PORT", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.007 "hdgst": ${hdgst:-false}, 00:35:45.007 "ddgst": ${ddgst:-false} 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 } 00:35:45.007 EOF 00:35:45.007 )") 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1698557 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme1", 00:35:45.007 "trtype": "tcp", 00:35:45.007 "traddr": "10.0.0.2", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "4420", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.007 "hdgst": false, 00:35:45.007 "ddgst": false 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 }' 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme1", 00:35:45.007 "trtype": "tcp", 00:35:45.007 "traddr": "10.0.0.2", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "4420", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.007 "hdgst": false, 00:35:45.007 "ddgst": false 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 }' 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme1", 00:35:45.007 "trtype": "tcp", 00:35:45.007 "traddr": "10.0.0.2", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "4420", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.007 "hdgst": false, 00:35:45.007 "ddgst": false 00:35:45.007 }, 00:35:45.007 "method": "bdev_nvme_attach_controller" 00:35:45.007 }' 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:35:45.007 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:45.007 "params": { 00:35:45.007 "name": "Nvme1", 00:35:45.007 "trtype": "tcp", 00:35:45.007 "traddr": "10.0.0.2", 00:35:45.007 "adrfam": "ipv4", 00:35:45.007 "trsvcid": "4420", 00:35:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.008 "hdgst": false, 00:35:45.008 "ddgst": false 00:35:45.008 }, 00:35:45.008 "method": "bdev_nvme_attach_controller" 00:35:45.008 }' 00:35:45.008 [2024-10-07 09:56:39.814741] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:45.008 [2024-10-07 09:56:39.814742] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:35:45.008 [2024-10-07 09:56:39.814822] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:45.008 [2024-10-07 09:56:39.814825] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:45.266 [2024-10-07 09:56:39.827233] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:45.266 [2024-10-07 09:56:39.827231] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:45.266 [2024-10-07 09:56:39.827331] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:45.266 [2024-10-07 09:56:39.827332] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:45.266 [2024-10-07 09:56:39.960521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.266 [2024-10-07 09:56:40.037515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.266 [2024-10-07 09:56:40.059158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:35:45.525 [2024-10-07 09:56:40.131502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:35:45.525 [2024-10-07 09:56:40.169498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.525 [2024-10-07 09:56:40.271124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:35:45.525 [2024-10-07 09:56:40.290386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.783 [2024-10-07 09:56:40.394559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:35:45.783 Running I/O for 1 seconds... 00:35:46.041 Running I/O for 1 seconds... 00:35:46.041 Running I/O for 1 seconds... 00:35:46.299 Running I/O for 1 seconds...
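For orientation, the trace above is target/bdev_io_wait.sh launching four bdevperf instances in parallel against cnode1, one per workload (write, read, flush, unmap), each on its own core mask and shared-memory id, with the gen_nvmf_target_json output shown above fed to --json via process substitution (/dev/fd/63). A condensed sketch of that launch pattern follows, in the same shell idiom; WRITE_PID and READ_PID are assumed names (only FLUSH_PID and UNMAP_PID appear in the trace), and the per-pid waits are collapsed into one:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# gen_nvmf_target_json (test/nvmf/common.sh) emits the bdev_nvme_attach_controller JSON printed above
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # bdev_io_wait.sh@37-40 waits on each pid in turn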
00:35:46.865 9901.00 IOPS, 38.68 MiB/s 00:35:46.865 Latency(us) 00:35:46.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.865 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:46.865 Nvme1n1 : 1.01 9941.96 38.84 0.00 0.00 12815.31 4538.97 14854.83 00:35:46.865 =================================================================================================================== 00:35:46.865 Total : 9941.96 38.84 0.00 0.00 12815.31 4538.97 14854.83 00:35:47.124 7994.00 IOPS, 31.23 MiB/s 00:35:47.124 Latency(us) 00:35:47.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.124 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:47.124 Nvme1n1 : 1.01 8055.09 31.47 0.00 0.00 15814.86 2475.80 20874.43 00:35:47.124 =================================================================================================================== 00:35:47.124 Total : 8055.09 31.47 0.00 0.00 15814.86 2475.80 20874.43 00:35:47.124 9025.00 IOPS, 35.25 MiB/s 00:35:47.124 Latency(us) 00:35:47.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.124 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:47.124 Nvme1n1 : 1.01 9102.33 35.56 0.00 0.00 14010.11 5000.15 21068.61 00:35:47.124 =================================================================================================================== 00:35:47.124 Total : 9102.33 35.56 0.00 0.00 14010.11 5000.15 21068.61 00:35:47.124 199088.00 IOPS, 777.69 MiB/s 00:35:47.124 Latency(us) 00:35:47.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.124 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:47.124 Nvme1n1 : 1.00 198708.32 776.20 0.00 0.00 640.76 306.44 1868.99 00:35:47.124 =================================================================================================================== 00:35:47.124 Total : 198708.32 776.20 0.00 0.00 640.76 306.44 1868.99 00:35:47.381 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1698559 00:35:47.381 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1698562 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1698565 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:47.639 09:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.639 rmmod nvme_tcp 00:35:47.639 rmmod nvme_fabrics 00:35:47.639 rmmod nvme_keyring 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1698490 ']' 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1698490 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1698490 ']' 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1698490 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698490 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698490' 00:35:47.639 killing process with pid 1698490 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1698490 00:35:47.639 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1698490 00:35:47.897 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:47.897 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:35:47.898 09:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.898 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.432 00:35:50.432 real 0m8.516s 00:35:50.432 user 0m16.761s 00:35:50.432 sys 0m5.147s 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:50.432 ************************************ 00:35:50.432 END TEST nvmf_bdev_io_wait 00:35:50.432 ************************************ 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:50.432 ************************************ 00:35:50.432 START TEST nvmf_queue_depth 00:35:50.432 ************************************ 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:50.432 * Looking for test storage... 
00:35:50.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:50.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.432 --rc genhtml_branch_coverage=1 00:35:50.432 --rc genhtml_function_coverage=1 00:35:50.432 --rc genhtml_legend=1 00:35:50.432 --rc geninfo_all_blocks=1 00:35:50.432 --rc geninfo_unexecuted_blocks=1 00:35:50.432 00:35:50.432 ' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:50.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.432 --rc genhtml_branch_coverage=1 00:35:50.432 --rc genhtml_function_coverage=1 00:35:50.432 --rc genhtml_legend=1 00:35:50.432 --rc geninfo_all_blocks=1 00:35:50.432 --rc geninfo_unexecuted_blocks=1 00:35:50.432 00:35:50.432 ' 00:35:50.432 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:50.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.432 --rc genhtml_branch_coverage=1 00:35:50.432 --rc genhtml_function_coverage=1 00:35:50.432 --rc genhtml_legend=1 00:35:50.432 --rc geninfo_all_blocks=1 00:35:50.432 --rc geninfo_unexecuted_blocks=1 00:35:50.432 00:35:50.432 ' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:50.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.433 --rc genhtml_branch_coverage=1 00:35:50.433 --rc genhtml_function_coverage=1 00:35:50.433 --rc genhtml_legend=1 00:35:50.433 --rc geninfo_all_blocks=1 00:35:50.433 --rc 
geninfo_unexecuted_blocks=1 00:35:50.433 00:35:50.433 ' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.433 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:50.433 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
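The next stretch of trace is gather_supported_nvmf_pci_devs picking the NICs for the run: it looks up the supported Intel E810/X722 and Mellanox device IDs in a PCI bus cache, keeps the E810 ports (SPDK_TEST_NVMF_NICS=e810), and maps each remaining PCI function to its kernel net device through sysfs. A stripped-down sketch of that loop, assuming the harness's pci_bus_cache array is already populated and showing only the E810 branch:

# pci_bus_cache maps "vendor:device" to the PCI addresses present on the bus (built elsewhere in nvmf/common.sh)
e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
pci_devs=("${e810[@]}")                                # E810 ports selected for this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) the kernel bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, leaving e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")                   # ends up as cvl_0_0 and cvl_0_1 here
done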
00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.024 09:56:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:53.024 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:53.024 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:53.024 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:35:53.025 Found net devices under 0000:84:00.0: cvl_0_0 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:53.025 Found net devices under 0000:84:00.1: cvl_0_1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:53.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:35:53.025 00:35:53.025 --- 10.0.0.2 ping statistics --- 00:35:53.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.025 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:53.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:35:53.025 00:35:53.025 --- 10.0.0.1 ping statistics --- 00:35:53.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.025 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1700891 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1700891 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1700891 ']' 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.025 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.025 [2024-10-07 09:56:47.726795] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:53.025 [2024-10-07 09:56:47.728363] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:35:53.025 [2024-10-07 09:56:47.728439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.285 [2024-10-07 09:56:47.842532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.285 [2024-10-07 09:56:48.027467] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.285 [2024-10-07 09:56:48.027543] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.285 [2024-10-07 09:56:48.027572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.285 [2024-10-07 09:56:48.027583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.285 [2024-10-07 09:56:48.027593] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:53.285 [2024-10-07 09:56:48.028401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.545 [2024-10-07 09:56:48.161985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:53.545 [2024-10-07 09:56:48.162497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
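Behind the trace above, nvmftestinit and nvmfappstart put the test bed together the same way for every tcp/phy run: the target-side E810 port (cvl_0_0) is moved into a private network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, a firewall exception is opened for port 4420, reachability is checked with ping, and nvmf_tgt is started inside the namespace on one core in interrupt mode. Condensed from the commands shown in the trace (error handling and the waitforlisten loop omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # target reachable from the root netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # initiator reachable from inside the netns
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &              # nvmfappstart: one core, interrupt mode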
00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 [2024-10-07 09:56:48.221347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 Malloc0 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.545 [2024-10-07 09:56:48.297633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1701034 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1701034 /var/tmp/bdevperf.sock 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1701034 ']' 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.545 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.802 [2024-10-07 09:56:48.378047] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:35:53.802 [2024-10-07 09:56:48.378125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701034 ] 00:35:53.802 [2024-10-07 09:56:48.461642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.802 [2024-10-07 09:56:48.583276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.368 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.368 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:35:54.368 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:54.368 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.368 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:54.368 NVMe0n1 00:35:54.368 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.368 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:54.627 Running I/O for 10 seconds... 00:36:04.867 7945.00 IOPS, 31.04 MiB/s 8165.00 IOPS, 31.89 MiB/s 8172.67 IOPS, 31.92 MiB/s 8187.25 IOPS, 31.98 MiB/s 8195.60 IOPS, 32.01 MiB/s 8374.00 IOPS, 32.71 MiB/s 8414.71 IOPS, 32.87 MiB/s 8427.50 IOPS, 32.92 MiB/s 8420.22 IOPS, 32.89 MiB/s 8402.50 IOPS, 32.82 MiB/s 00:36:04.867 Latency(us) 00:36:04.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:36:04.867 Verification LBA range: start 0x0 length 0x4000 00:36:04.867 NVMe0n1 : 10.08 8434.81 32.95 0.00 0.00 120884.58 23592.96 74565.40 00:36:04.867 =================================================================================================================== 00:36:04.867 Total : 8434.81 32.95 0.00 0.00 120884.58 23592.96 74565.40 00:36:04.867 { 00:36:04.867 "results": [ 00:36:04.867 { 00:36:04.867 "job": "NVMe0n1", 00:36:04.867 "core_mask": "0x1", 00:36:04.867 "workload": "verify", 00:36:04.867 "status": "finished", 00:36:04.867 "verify_range": { 00:36:04.867 "start": 0, 00:36:04.867 "length": 16384 00:36:04.867 }, 00:36:04.867 "queue_depth": 1024, 00:36:04.867 "io_size": 4096, 00:36:04.867 "runtime": 10.082268, 00:36:04.867 "iops": 8434.80851728996, 00:36:04.867 "mibps": 32.9484707706639, 00:36:04.867 "io_failed": 0, 00:36:04.867 "io_timeout": 0, 00:36:04.867 "avg_latency_us": 120884.58093682688, 00:36:04.867 "min_latency_us": 23592.96, 00:36:04.867 "max_latency_us": 74565.40444444444 00:36:04.868 } 00:36:04.868 ], 00:36:04.868 "core_count": 1 00:36:04.868 } 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1701034 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1701034 ']' 00:36:04.868 
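Stripped of the xtrace noise, the queue_depth test body above comes down to a few RPCs against the target plus one bdevperf run at queue depth 1024. In the harness, rpc_cmd forwards its arguments to scripts/rpc.py (against /var/tmp/spdk.sock by default, or the socket given with -s); spelling it out that way gives a self-contained sketch, with every flag, NQN and address taken from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# target-side provisioning (queue_depth.sh@23-27)
"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
"$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
"$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles in -z mode, a controller is attached over its own RPC socket,
# then perform_tests drives the 10 s verify workload at queue depth 1024 (queue_depth.sh@29-39)
"$SPDK"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
# (the harness waits for the socket with waitforlisten before issuing RPCs to bdevperf)
"$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$BDEVPERF_PID"                                      # the harness uses killprocess for this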
09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1701034 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701034 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701034' 00:36:04.868 killing process with pid 1701034 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1701034 00:36:04.868 Received shutdown signal, test time was about 10.000000 seconds 00:36:04.868 00:36:04.868 Latency(us) 00:36:04.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.868 =================================================================================================================== 00:36:04.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:04.868 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1701034 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:05.126 rmmod nvme_tcp 00:36:05.126 rmmod nvme_fabrics 00:36:05.126 rmmod nvme_keyring 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1700891 ']' 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1700891 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1700891 ']' 00:36:05.126 09:56:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1700891 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1700891 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1700891' 00:36:05.126 killing process with pid 1700891 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1700891 00:36:05.126 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1700891 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.385 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:07.917 00:36:07.917 real 0m17.479s 00:36:07.917 user 0m23.718s 00:36:07.917 sys 0m4.387s 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.917 ************************************ 00:36:07.917 END TEST nvmf_queue_depth 00:36:07.917 ************************************ 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:07.917 ************************************ 00:36:07.917 START TEST nvmf_target_multipath 00:36:07.917 ************************************ 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:07.917 * Looking for test storage... 00:36:07.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:07.917 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.918 --rc genhtml_branch_coverage=1 00:36:07.918 --rc genhtml_function_coverage=1 00:36:07.918 --rc genhtml_legend=1 00:36:07.918 --rc geninfo_all_blocks=1 00:36:07.918 --rc geninfo_unexecuted_blocks=1 00:36:07.918 00:36:07.918 ' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.918 --rc genhtml_branch_coverage=1 00:36:07.918 --rc genhtml_function_coverage=1 00:36:07.918 --rc genhtml_legend=1 00:36:07.918 --rc geninfo_all_blocks=1 00:36:07.918 --rc geninfo_unexecuted_blocks=1 00:36:07.918 00:36:07.918 ' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.918 --rc genhtml_branch_coverage=1 00:36:07.918 --rc genhtml_function_coverage=1 00:36:07.918 --rc genhtml_legend=1 00:36:07.918 --rc geninfo_all_blocks=1 00:36:07.918 --rc geninfo_unexecuted_blocks=1 00:36:07.918 00:36:07.918 ' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.918 --rc genhtml_branch_coverage=1 00:36:07.918 --rc genhtml_function_coverage=1 00:36:07.918 --rc 
genhtml_legend=1 00:36:07.918 --rc geninfo_all_blocks=1 00:36:07.918 --rc geninfo_unexecuted_blocks=1 00:36:07.918 00:36:07.918 ' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.918 09:57:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.918 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.919 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.451 09:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.451 09:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:10.451 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:10.451 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.451 09:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:10.451 Found net devices under 0000:84:00.0: cvl_0_0 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:10.451 Found net devices under 0000:84:00.1: cvl_0_1 00:36:10.451 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.452 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:10.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:36:10.452 00:36:10.452 --- 10.0.0.2 ping statistics --- 00:36:10.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.452 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:10.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:10.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:36:10.452 00:36:10.452 --- 10.0.0.1 ping statistics --- 00:36:10.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.452 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:10.452 only one NIC for nvmf test 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.452 rmmod nvme_tcp 00:36:10.452 rmmod nvme_fabrics 00:36:10.452 rmmod nvme_keyring 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.452 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:12.988 09:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.988 00:36:12.988 real 0m5.027s 00:36:12.988 user 0m0.971s 00:36:12.988 sys 0m2.057s 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 ************************************ 00:36:12.988 END TEST nvmf_target_multipath 00:36:12.988 ************************************ 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 ************************************ 00:36:12.988 START TEST nvmf_zcopy 00:36:12.988 ************************************ 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:12.988 * Looking for test storage... 
00:36:12.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:12.988 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.989 --rc genhtml_branch_coverage=1 00:36:12.989 --rc genhtml_function_coverage=1 00:36:12.989 --rc genhtml_legend=1 00:36:12.989 --rc geninfo_all_blocks=1 00:36:12.989 --rc geninfo_unexecuted_blocks=1 00:36:12.989 00:36:12.989 ' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.989 --rc genhtml_branch_coverage=1 00:36:12.989 --rc genhtml_function_coverage=1 00:36:12.989 --rc genhtml_legend=1 00:36:12.989 --rc geninfo_all_blocks=1 00:36:12.989 --rc geninfo_unexecuted_blocks=1 00:36:12.989 00:36:12.989 ' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.989 --rc genhtml_branch_coverage=1 00:36:12.989 --rc genhtml_function_coverage=1 00:36:12.989 --rc genhtml_legend=1 00:36:12.989 --rc geninfo_all_blocks=1 00:36:12.989 --rc geninfo_unexecuted_blocks=1 00:36:12.989 00:36:12.989 ' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.989 --rc genhtml_branch_coverage=1 00:36:12.989 --rc genhtml_function_coverage=1 00:36:12.989 --rc genhtml_legend=1 00:36:12.989 --rc geninfo_all_blocks=1 00:36:12.989 --rc geninfo_unexecuted_blocks=1 00:36:12.989 00:36:12.989 ' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.989 09:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:12.989 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.522 09:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:15.522 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:15.522 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:15.522 Found net devices under 0000:84:00.0: cvl_0_0 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:15.522 Found net devices under 0000:84:00.1: cvl_0_1 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.522 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.523 09:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:36:15.523 00:36:15.523 --- 10.0.0.2 ping statistics --- 00:36:15.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.523 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:36:15.523 00:36:15.523 --- 10.0.0.1 ping statistics --- 00:36:15.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.523 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:15.523 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1706859 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1706859 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1706859 ']' 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.782 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.782 [2024-10-07 09:57:10.460174] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.782 [2024-10-07 09:57:10.462441] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:36:15.782 [2024-10-07 09:57:10.462559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.782 [2024-10-07 09:57:10.592033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.041 [2024-10-07 09:57:10.768431] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.041 [2024-10-07 09:57:10.768480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.041 [2024-10-07 09:57:10.768494] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.041 [2024-10-07 09:57:10.768506] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.041 [2024-10-07 09:57:10.768531] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.041 [2024-10-07 09:57:10.769144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.300 [2024-10-07 09:57:10.867788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:16.300 [2024-10-07 09:57:10.868124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
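The nvmftestinit trace above (nvmf_tcp_init plus the interrupt-mode nvmfappstart) reduces to a short sequence of commands. A condensed sketch reconstructed from the trace, using the interface names, addresses, and flags the log reports; an illustration of what the harness does, not the harness code itself:

    # Move the first e810 port into a private namespace; the second port stays in the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Target side gets 10.0.0.2, initiator side gets 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace: interrupt mode, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &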
00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 [2024-10-07 09:57:11.821959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 [2024-10-07 09:57:11.842173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:17.237 09:57:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 malloc0 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:17.237 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:17.237 { 00:36:17.237 "params": { 00:36:17.237 "name": "Nvme$subsystem", 00:36:17.237 "trtype": "$TEST_TRANSPORT", 00:36:17.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.237 "adrfam": "ipv4", 00:36:17.237 "trsvcid": "$NVMF_PORT", 00:36:17.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.237 "hdgst": ${hdgst:-false}, 00:36:17.237 "ddgst": ${ddgst:-false} 00:36:17.237 }, 00:36:17.237 "method": "bdev_nvme_attach_controller" 00:36:17.237 } 00:36:17.237 EOF 00:36:17.237 )") 00:36:17.238 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:36:17.238 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:36:17.238 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:36:17.238 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:17.238 "params": { 00:36:17.238 "name": "Nvme1", 00:36:17.238 "trtype": "tcp", 00:36:17.238 "traddr": "10.0.0.2", 00:36:17.238 "adrfam": "ipv4", 00:36:17.238 "trsvcid": "4420", 00:36:17.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:17.238 "hdgst": false, 00:36:17.238 "ddgst": false 00:36:17.238 }, 00:36:17.238 "method": "bdev_nvme_attach_controller" 00:36:17.238 }' 00:36:17.238 [2024-10-07 09:57:11.993625] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
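With nvmf_tgt listening on /var/tmp/spdk.sock, target/zcopy.sh configures it through rpc_cmd, a thin wrapper around scripts/rpc.py. Issued directly with rpc.py, and using only the RPC names and arguments visible in the trace above (the --zcopy transport flag and the 10.0.0.2:4420 listener are the ones the log shows), the same setup would look roughly like this sketch:

    RPC="./scripts/rpc.py"   # illustrative path; the harness calls the rpc_cmd wrapper instead

    # TCP transport with zero-copy enabled; -o and -c 0 are the extra TCP options the test passes
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem cnode1: allow any host (-a), fixed serial number, up to 10 namespaces (-m 10)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem over NVMe/TCP via the JSON fragment printed by gen_nvmf_target_json above (a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420), fed in through --json /dev/fd/62 for the 10-second verify workload.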
00:36:17.238 [2024-10-07 09:57:11.993817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707142 ] 00:36:17.496 [2024-10-07 09:57:12.103782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.496 [2024-10-07 09:57:12.228122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.063 Running I/O for 10 seconds... 00:36:27.878 5100.00 IOPS, 39.84 MiB/s 5138.00 IOPS, 40.14 MiB/s 5155.67 IOPS, 40.28 MiB/s 5163.75 IOPS, 40.34 MiB/s 5170.80 IOPS, 40.40 MiB/s 5171.83 IOPS, 40.40 MiB/s 5178.71 IOPS, 40.46 MiB/s 5177.38 IOPS, 40.45 MiB/s 5202.67 IOPS, 40.65 MiB/s 5237.70 IOPS, 40.92 MiB/s 00:36:27.878 Latency(us) 00:36:27.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:27.878 Verification LBA range: start 0x0 length 0x1000 00:36:27.878 Nvme1n1 : 10.06 5221.46 40.79 0.00 0.00 24350.26 1662.67 41166.32 00:36:27.878 =================================================================================================================== 00:36:27.878 Total : 5221.46 40.79 0.00 0.00 24350.26 1662.67 41166.32 00:36:28.182 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1708318 00:36:28.182 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:28.182 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:28.182 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:28.182 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:28.183 { 00:36:28.183 "params": { 00:36:28.183 "name": "Nvme$subsystem", 00:36:28.183 "trtype": "$TEST_TRANSPORT", 00:36:28.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.183 "adrfam": "ipv4", 00:36:28.183 "trsvcid": "$NVMF_PORT", 00:36:28.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.183 "hdgst": ${hdgst:-false}, 00:36:28.183 "ddgst": ${ddgst:-false} 00:36:28.183 }, 00:36:28.183 "method": "bdev_nvme_attach_controller" 00:36:28.183 } 00:36:28.183 EOF 00:36:28.183 )") 00:36:28.183 [2024-10-07 09:57:22.957825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.183 [2024-10-07 09:57:22.957946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- 
# jq . 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:36:28.183 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:28.183 "params": { 00:36:28.183 "name": "Nvme1", 00:36:28.183 "trtype": "tcp", 00:36:28.183 "traddr": "10.0.0.2", 00:36:28.183 "adrfam": "ipv4", 00:36:28.183 "trsvcid": "4420", 00:36:28.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:28.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:28.183 "hdgst": false, 00:36:28.183 "ddgst": false 00:36:28.183 }, 00:36:28.183 "method": "bdev_nvme_attach_controller" 00:36:28.183 }' 00:36:28.183 [2024-10-07 09:57:22.965606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.183 [2024-10-07 09:57:22.965638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.183 [2024-10-07 09:57:22.973620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.183 [2024-10-07 09:57:22.973647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:22.981603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:22.981626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:22.989633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:22.989660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.001616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.001649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.007447] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:36:28.482 [2024-10-07 09:57:23.007535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708318 ] 00:36:28.482 [2024-10-07 09:57:23.009608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.009633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.017616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.017642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.029747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.029803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.041753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.041813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.049618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.049644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.057617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.057642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.065617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.065642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.073617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.073642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.081481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.482 [2024-10-07 09:57:23.081616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.081639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.089649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.089689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.097641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.097677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.105618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.105645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.113617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.113643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:36:28.482 [2024-10-07 09:57:23.121617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.482 [2024-10-07 09:57:23.121644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.482 [2024-10-07 09:57:23.129617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.129643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.137617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.137649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.145617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.145643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.153638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.153674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.161611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.161637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.169616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.169642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.177615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.177640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.185613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.185639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.193614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.193638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.201614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.201640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.205850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.483 [2024-10-07 09:57:23.209614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.209638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.217616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.217642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.225636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.225669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 
09:57:23.233639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.233676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.241637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.241674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.249641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.249679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.257638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.257688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.265636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.265673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.273642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.273680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.281614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.281640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.289640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.289677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.483 [2024-10-07 09:57:23.297641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.483 [2024-10-07 09:57:23.297680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.305642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.305679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.313614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.313640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.321613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.321638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.329625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.329657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.337624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.337653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.345621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.345650] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.353622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.353650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.361616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.361642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.369615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.369642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.377614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.377639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.385615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.385639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.393619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.393646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.401620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.401647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.409623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.409659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.417621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.417651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.464280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.464312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.469621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.469649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 [2024-10-07 09:57:23.477622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.742 [2024-10-07 09:57:23.477651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.742 Running I/O for 5 seconds... 
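The long run of paired messages above and below — subsystem.c:2128 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace" — is the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1, which is already attached to cnode1, while the second bdevperf job (5-second randrw) is being brought up and run. A single such rejection can be reproduced by hand; a minimal illustration, assuming the configuration created earlier is still in place:

    # NSID 1 already carries malloc0, so a second add for the same NSID is refused
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target log: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
    #             nvmf_rpc_ns_paused: Unable to add namespace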
00:36:28.742 [2024-10-07 09:57:23.493864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:28.742 [2024-10-07 09:57:23.493899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log condensed: the same two-line error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats for every add-namespace attempt from 2024-10-07 09:57:23.506 through 09:57:27.357, console time 00:36:28.742 to 00:36:32.630; apart from the timestamps, only the periodic throughput samples below vary]
00:36:29.780 10166.00 IOPS, 79.42 MiB/s
00:36:30.816 10207.00 IOPS, 79.74 MiB/s
00:36:31.852 10213.67 IOPS, 79.79 MiB/s
00:36:32.630 [2024-10-07 09:57:27.367507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:32.630 [2024-10-07 09:57:27.367532]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.383405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.383430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.394668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.394693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.406021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.406047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.416552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.416577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.428529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.428555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.630 [2024-10-07 09:57:27.441759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.630 [2024-10-07 09:57:27.441785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.452480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.452505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.464405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.464431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.476329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.476355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.487589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.487614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 10395.50 IOPS, 81.21 MiB/s [2024-10-07 09:57:27.499398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.499431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.511660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.511686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.522664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.522689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.534367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.534391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.545459] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.545484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.556105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.556131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.569154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.888 [2024-10-07 09:57:27.569196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.888 [2024-10-07 09:57:27.578454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.578479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.590812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.590837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.602092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.602119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.612020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.612045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.624832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.624856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.638209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.638234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.648148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.648191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.660759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.660784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.674124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.674150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.684522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.684547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.889 [2024-10-07 09:57:27.697318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.889 [2024-10-07 09:57:27.697343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.709989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.710015] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.721114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.721151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.733375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.733399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.745035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.745061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.757625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.757650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.768342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.768366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.780017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.780044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.793532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.147 [2024-10-07 09:57:27.793558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.147 [2024-10-07 09:57:27.803115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.803141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.815624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.815648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.826715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.826740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.838392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.838417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.849888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.849922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.861563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.861587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.873589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.873614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.885488] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.885513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.897478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.897503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.909577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.909608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.922204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.922229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.934345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.934375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.947348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.947388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.148 [2024-10-07 09:57:27.959874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.148 [2024-10-07 09:57:27.959936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:27.974874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:27.974912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:27.985964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:27.985990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:27.999334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:27.999365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.015867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.015909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.027212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.027254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.044035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.044063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.057482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.057514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.068253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.068284] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.081445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.081477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.094050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.094077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.112016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.112044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.122994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.123020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.139696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.139728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.151819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.151849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.164701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.164731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.177436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.177466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.190057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.190082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.207507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.207538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.406 [2024-10-07 09:57:28.221401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.406 [2024-10-07 09:57:28.221432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.233033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.233059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.246760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.246789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.261865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.261905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.272907] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.272955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.286602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.286632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.299079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.299105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.312015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.312041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.326861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.326898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.338744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.338775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.355167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.355206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.368003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.368029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.381049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.381076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.393949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.393974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.406502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.406532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.418333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.418363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.430552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.430582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.442730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.442760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.460741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.460771] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.665 [2024-10-07 09:57:28.471520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.665 [2024-10-07 09:57:28.471549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.487689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.487719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 10415.40 IOPS, 81.37 MiB/s [2024-10-07 09:57:28.498443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.498468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 00:36:33.924 Latency(us) 00:36:33.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.924 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:33.924 Nvme1n1 : 5.01 10426.24 81.46 0.00 0.00 12261.95 3325.35 20000.62 00:36:33.924 =================================================================================================================== 00:36:33.924 Total : 10426.24 81.46 0.00 0.00 12261.95 3325.35 20000.62 00:36:33.924 [2024-10-07 09:57:28.505624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.505652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.513624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.513653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.521620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.521645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.529661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.529706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.537663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.537711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.545659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.545706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.553659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.553705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.561654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.561701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.569663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.569708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 
09:57:28.577657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.577703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.585659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.585704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.593661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.593720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.601663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.601709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.609662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.609711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.617659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.617704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.625656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.625701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.633656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.633702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.641652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.641698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.649623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.649649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.657622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.657648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.665618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.665642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.673617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.673643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.681602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.681623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.689665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.689713] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.697671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.697717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.705619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.705644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.713618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.713643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.721617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.721641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.729615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.729640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.924 [2024-10-07 09:57:28.737612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.924 [2024-10-07 09:57:28.737635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.182 [2024-10-07 09:57:28.745662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.182 [2024-10-07 09:57:28.745720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.183 [2024-10-07 09:57:28.753657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.183 [2024-10-07 09:57:28.753701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.183 [2024-10-07 09:57:28.761618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.183 [2024-10-07 09:57:28.761644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.183 [2024-10-07 09:57:28.769616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.183 [2024-10-07 09:57:28.769640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.183 [2024-10-07 09:57:28.777617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.183 [2024-10-07 09:57:28.777641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1708318) - No such process 00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1708318 00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.183 09:57:28 
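The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs condensed above is what repeated nvmf_subsystem_add_ns calls look like while the requested NSID is still attached; each attempt is rejected on the target side and traced twice. A minimal sketch of an RPC loop that produces the same pattern against a running target (the bdev name and iteration count are illustrative, not taken from zcopy.sh):

    # Sketch only: keep requesting NSID 1 while it is already attached to cnode1.
    # Every call is expected to fail with "Requested NSID 1 already in use".
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done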
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:34.183 delay0
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:34.183 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:36:34.183 [2024-10-07 09:57:28.971048] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:36:42.296 Initializing NVMe Controllers
00:36:42.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:42.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:42.296 Initialization complete. Launching workers.
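The three traced commands above (rpc_cmd bdev_delay_create, rpc_cmd nvmf_subsystem_add_ns, and the abort example) can be replayed by hand; rpc_cmd in the test harness is, roughly, a wrapper around scripts/rpc.py. A sketch of the standalone equivalent, assuming a target is already running with subsystem nqn.2016-06.io.spdk:cnode1 and a bdev named malloc0:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Wrap malloc0 in a delay bdev so that I/O stays in flight long enough to be abortable.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Expose the delayed bdev as NSID 1 of the subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Run the abort example against NSID 1 with the same options as traced above.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'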
00:36:42.296 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 19569 00:36:42.296 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19682, failed to submit 119 00:36:42.296 success 19596, unsuccessful 86, failed 0 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:42.296 rmmod nvme_tcp 00:36:42.296 rmmod nvme_fabrics 00:36:42.296 rmmod nvme_keyring 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1706859 ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1706859 ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706859' 00:36:42.296 killing process with pid 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1706859 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:42.296 09:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.296 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:44.197 00:36:44.197 real 0m31.240s 00:36:44.197 user 0m41.405s 00:36:44.197 sys 0m12.280s 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:44.197 ************************************ 00:36:44.197 END TEST nvmf_zcopy 00:36:44.197 ************************************ 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:44.197 ************************************ 00:36:44.197 START TEST nvmf_nmic 00:36:44.197 ************************************ 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:44.197 * Looking for test storage... 
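The zcopy test ends here (31.2 s wall time per the timing summary above) and the harness moves on to nvmf_nmic using the invocation it just traced. A sketch for reproducing only that test outside Jenkins, assuming a built SPDK tree on a host with the same E810 test NICs and the job's environment already exported:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode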
00:36:44.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:44.197 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.198 --rc genhtml_branch_coverage=1 00:36:44.198 --rc genhtml_function_coverage=1 00:36:44.198 --rc genhtml_legend=1 00:36:44.198 --rc geninfo_all_blocks=1 00:36:44.198 --rc geninfo_unexecuted_blocks=1 00:36:44.198 00:36:44.198 ' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.198 --rc genhtml_branch_coverage=1 00:36:44.198 --rc genhtml_function_coverage=1 00:36:44.198 --rc genhtml_legend=1 00:36:44.198 --rc geninfo_all_blocks=1 00:36:44.198 --rc geninfo_unexecuted_blocks=1 00:36:44.198 00:36:44.198 ' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.198 --rc genhtml_branch_coverage=1 00:36:44.198 --rc genhtml_function_coverage=1 00:36:44.198 --rc genhtml_legend=1 00:36:44.198 --rc geninfo_all_blocks=1 00:36:44.198 --rc geninfo_unexecuted_blocks=1 00:36:44.198 00:36:44.198 ' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.198 --rc genhtml_branch_coverage=1 00:36:44.198 --rc genhtml_function_coverage=1 00:36:44.198 --rc genhtml_legend=1 00:36:44.198 --rc geninfo_all_blocks=1 00:36:44.198 --rc geninfo_unexecuted_blocks=1 00:36:44.198 00:36:44.198 ' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.198 09:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:44.198 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:44.199 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.735 09:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:46.735 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.735 09:57:41 
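The e810/x722/mlx arrays being populated above are PCI vendor:device ID whitelists (Intel 0x8086, Mellanox 0x15b3) that gather_supported_nvmf_pci_devs matches against the bus. A standalone illustration of the same classification, using lspci instead of the harness's pci_bus_cache and collapsing the Mellanox IDs into a single wildcard, might look like this sketch:

# Illustrative only, not the harness code: 0x1592/0x159b => E810 (ice), 0x37d2 => X722, 0x15b3:* => Mellanox
intel=8086 mellanox=15b3
while read -r addr ids; do
    case "$ids" in
        "$intel":1592|"$intel":159b) echo "E810 NIC at $addr" ;;
        "$intel":37d2)               echo "X722 NIC at $addr" ;;
        "$mellanox":*)               echo "Mellanox NIC at $addr" ;;
    esac
done < <(lspci -Dn | awk '{print $1, $3}')
# On this host the matches are the two E810 ports, 0000:84:00.0 and 0000:84:00.1 (0x8086 - 0x159b).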
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:46.735 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:46.735 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:46.736 Found net devices under 0000:84:00.0: cvl_0_0 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.736 
09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:46.736 Found net devices under 0000:84:00.1: cvl_0_1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:46.736 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.994 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.994 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.994 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.994 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:36:46.994 00:36:46.994 --- 10.0.0.2 ping statistics --- 00:36:46.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.994 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:36:46.994 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:46.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:36:46.994 00:36:46.994 --- 10.0.0.1 ping statistics --- 00:36:46.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.995 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1711845 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1711845 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1711845 ']' 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:46.995 09:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:46.995 [2024-10-07 09:57:41.691110] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:46.995 [2024-10-07 09:57:41.692680] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:36:46.995 [2024-10-07 09:57:41.692761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.995 [2024-10-07 09:57:41.776103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.253 [2024-10-07 09:57:41.904472] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.253 [2024-10-07 09:57:41.904544] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.253 [2024-10-07 09:57:41.904562] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.253 [2024-10-07 09:57:41.904575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.253 [2024-10-07 09:57:41.904586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.253 [2024-10-07 09:57:41.906614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.253 [2024-10-07 09:57:41.906687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.253 [2024-10-07 09:57:41.906742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.253 [2024-10-07 09:57:41.906745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.253 [2024-10-07 09:57:42.020932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:47.253 [2024-10-07 09:57:42.021177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:47.254 [2024-10-07 09:57:42.021484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
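Pulling the interleaved trace together, the target-side plumbing and launch performed above reduce to roughly this sequence (a sketch; cvl_0_0/cvl_0_1 are the E810 netdev names on this host, and the nvmf_tgt path is abbreviated):

# Move the target-facing port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the SPDK target inside the namespace: 4 reactors (-m 0xF), interrupt mode,
# all tracepoint groups enabled (-e 0xFFFF), shared-memory id 0 (-i 0)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &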
00:36:47.254 [2024-10-07 09:57:42.022152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:47.254 [2024-10-07 09:57:42.022418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:47.254 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:47.254 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:36:47.254 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:47.254 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.254 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 [2024-10-07 09:57:42.095540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 Malloc0 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.512 
09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 [2024-10-07 09:57:42.155647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:47.512 test case1: single bdev can't be used in multiple subsystems 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:47.512 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.513 [2024-10-07 09:57:42.179414] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:47.513 [2024-10-07 09:57:42.179444] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:47.513 [2024-10-07 09:57:42.179482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:47.513 request: 00:36:47.513 { 00:36:47.513 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:47.513 "namespace": { 00:36:47.513 "bdev_name": "Malloc0", 00:36:47.513 "no_auto_visible": false 00:36:47.513 }, 00:36:47.513 "method": "nvmf_subsystem_add_ns", 00:36:47.513 "req_id": 1 00:36:47.513 } 00:36:47.513 Got JSON-RPC error response 00:36:47.513 response: 00:36:47.513 { 00:36:47.513 "code": -32602, 00:36:47.513 "message": "Invalid parameters" 00:36:47.513 } 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:47.513 09:57:42 
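Stripped of the rpc_cmd/xtrace wrapping, test case 1 above provisions cnode1 with the Malloc0 namespace and then confirms that attaching the same bdev to a second subsystem is rejected, since Malloc0 is already claimed exclusive_write by cnode1; that is the JSON-RPC error shown. A condensed, roughly equivalent sketch using direct rpc.py calls:

rpc=./scripts/rpc.py   # rpc_cmd in the harness wraps this against the target's RPC socket (/var/tmp/spdk.sock in this run)

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Negative check: the same bdev cannot back a namespace in a second subsystem
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: adding a claimed bdev to cnode2 should have failed" >&2
    exit 1
fi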
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:47.513 Adding namespace failed - expected result. 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:47.513 test case2: host connect to nvmf target in multiple paths 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.513 [2024-10-07 09:57:42.187501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.513 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:47.770 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:48.027 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:48.027 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:36:48.027 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:48.027 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:48.027 09:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:36:49.927 09:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:49.927 [global] 00:36:49.927 thread=1 00:36:49.927 invalidate=1 
00:36:49.927 rw=write 00:36:49.927 time_based=1 00:36:49.927 runtime=1 00:36:49.927 ioengine=libaio 00:36:49.927 direct=1 00:36:49.927 bs=4096 00:36:49.927 iodepth=1 00:36:49.927 norandommap=0 00:36:49.927 numjobs=1 00:36:49.927 00:36:49.927 verify_dump=1 00:36:49.927 verify_backlog=512 00:36:49.927 verify_state_save=0 00:36:49.927 do_verify=1 00:36:49.927 verify=crc32c-intel 00:36:49.927 [job0] 00:36:49.927 filename=/dev/nvme0n1 00:36:49.927 Could not set queue depth (nvme0n1) 00:36:50.185 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:50.185 fio-3.35 00:36:50.185 Starting 1 thread 00:36:51.558 00:36:51.558 job0: (groupid=0, jobs=1): err= 0: pid=1712339: Mon Oct 7 09:57:46 2024 00:36:51.558 read: IOPS=23, BW=92.5KiB/s (94.7kB/s)(96.0KiB/1038msec) 00:36:51.558 slat (nsec): min=9920, max=29175, avg=19665.88, stdev=4994.35 00:36:51.558 clat (usec): min=317, max=41040, avg=39258.49, stdev=8294.80 00:36:51.558 lat (usec): min=344, max=41058, avg=39278.16, stdev=8293.25 00:36:51.558 clat percentiles (usec): 00:36:51.558 | 1.00th=[ 318], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:51.558 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.558 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.558 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:51.558 | 99.99th=[41157] 00:36:51.558 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:36:51.558 slat (nsec): min=10299, max=48578, avg=11667.90, stdev=2405.71 00:36:51.558 clat (usec): min=149, max=362, avg=170.79, stdev=17.85 00:36:51.558 lat (usec): min=160, max=410, avg=182.46, stdev=19.00 00:36:51.558 clat percentiles (usec): 00:36:51.558 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:36:51.558 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:36:51.558 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 198], 00:36:51.558 | 99.00th=[ 221], 99.50th=[ 281], 99.90th=[ 363], 99.95th=[ 363], 00:36:51.558 | 99.99th=[ 363] 00:36:51.558 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:51.558 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:51.558 lat (usec) : 250=94.96%, 500=0.75% 00:36:51.558 lat (msec) : 50=4.29% 00:36:51.558 cpu : usr=0.68%, sys=0.48%, ctx=536, majf=0, minf=1 00:36:51.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.558 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.558 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.558 00:36:51.558 Run status group 0 (all jobs): 00:36:51.558 READ: bw=92.5KiB/s (94.7kB/s), 92.5KiB/s-92.5KiB/s (94.7kB/s-94.7kB/s), io=96.0KiB (98.3kB), run=1038-1038msec 00:36:51.558 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:36:51.558 00:36:51.558 Disk stats (read/write): 00:36:51.558 nvme0n1: ios=70/512, merge=0/0, ticks=807/86, in_queue=893, util=91.48% 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:51.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:51.558 09:57:46 
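Test case 2 above, condensed: the host connects to cnode1 through two portals (4420 and 4421) using the generated hostnqn/hostid, runs a short crc32c-verified 4k write job through fio against the resulting namespace, and a single disconnect by NQN tears down both paths, hence "disconnected 2 controller(s)". A sketch of the equivalent steps, with the job options taken from the fio-wrapper dump above (the /dev/nvme0n1 name depends on what the connects enumerate on this host):

nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

cat > nmic.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic.fio

# One disconnect by subsystem NQN drops both controllers/paths
nvme disconnect -n nqn.2016-06.io.spdk:cnode1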
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:51.558 rmmod nvme_tcp 00:36:51.558 rmmod nvme_fabrics 00:36:51.558 rmmod nvme_keyring 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1711845 ']' 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1711845 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1711845 ']' 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1711845 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711845 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1711845' 00:36:51.558 killing process with pid 1711845 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1711845 00:36:51.558 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1711845 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.126 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.028 00:36:54.028 real 0m10.098s 00:36:54.028 user 0m17.815s 00:36:54.028 sys 0m3.902s 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:54.028 ************************************ 00:36:54.028 END TEST nvmf_nmic 00:36:54.028 ************************************ 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:54.028 ************************************ 00:36:54.028 START TEST nvmf_fio_target 00:36:54.028 ************************************ 00:36:54.028 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:54.287 * Looking for test storage... 
00:36:54.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.287 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:54.287 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:36:54.287 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.287 --rc genhtml_branch_coverage=1 00:36:54.287 --rc genhtml_function_coverage=1 00:36:54.287 --rc genhtml_legend=1 00:36:54.287 --rc geninfo_all_blocks=1 00:36:54.287 --rc geninfo_unexecuted_blocks=1 00:36:54.287 00:36:54.287 ' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.287 --rc genhtml_branch_coverage=1 00:36:54.287 --rc genhtml_function_coverage=1 00:36:54.287 --rc genhtml_legend=1 00:36:54.287 --rc geninfo_all_blocks=1 00:36:54.287 --rc geninfo_unexecuted_blocks=1 00:36:54.287 00:36:54.287 ' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.287 --rc genhtml_branch_coverage=1 00:36:54.287 --rc genhtml_function_coverage=1 00:36:54.287 --rc genhtml_legend=1 00:36:54.287 --rc geninfo_all_blocks=1 00:36:54.287 --rc geninfo_unexecuted_blocks=1 00:36:54.287 00:36:54.287 ' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:54.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.287 --rc genhtml_branch_coverage=1 00:36:54.287 --rc genhtml_function_coverage=1 00:36:54.287 --rc genhtml_legend=1 00:36:54.287 --rc geninfo_all_blocks=1 00:36:54.287 --rc geninfo_unexecuted_blocks=1 00:36:54.287 
00:36:54.287 ' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.287 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:54.288 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:57.573 09:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.573 09:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:57.573 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:57.573 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.573 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:57.574 Found net 
devices under 0000:84:00.0: cvl_0_0 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:57.574 Found net devices under 0000:84:00.1: cvl_0_1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:36:57.574 00:36:57.574 --- 10.0.0.2 ping statistics --- 00:36:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.574 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:57.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:36:57.574 00:36:57.574 --- 10.0.0.1 ping statistics --- 00:36:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.574 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1714543 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1714543 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1714543 ']' 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
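For readability, the nvmftestinit trace above amounts to moving the target-side E810 port into its own network namespace while the initiator-side port stays in the default namespace. A condensed sketch of that bring-up, using only the device names and addresses this log reports, is shown below; the nvmf/common.sh helpers do more bookkeeping (and the iptables rule is tagged with a comment via the ipts wrapper) than this simplified replay shows.

  # Condensed sketch of the bring-up traced above; assumes root and the two
  # E810 ports already present as cvl_0_0 / cvl_0_1, as in this log.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                   # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> default namespace

Keeping the target port in its own namespace gives the SPDK target and the kernel initiator separate IP stacks on one host, which is why nvmf_tgt is launched above under ip netns exec cvl_0_0_ns_spdk.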
00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:57.574 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.574 [2024-10-07 09:57:51.904679] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.574 [2024-10-07 09:57:51.905934] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:36:57.574 [2024-10-07 09:57:51.905993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.574 [2024-10-07 09:57:51.982981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:57.574 [2024-10-07 09:57:52.100130] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.574 [2024-10-07 09:57:52.100199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.574 [2024-10-07 09:57:52.100217] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.574 [2024-10-07 09:57:52.100231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.574 [2024-10-07 09:57:52.100242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.574 [2024-10-07 09:57:52.102084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.574 [2024-10-07 09:57:52.102138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:57.574 [2024-10-07 09:57:52.102197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:57.574 [2024-10-07 09:57:52.102201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.574 [2024-10-07 09:57:52.212602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:57.574 [2024-10-07 09:57:52.212849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:57.574 [2024-10-07 09:57:52.213156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:57.574 [2024-10-07 09:57:52.213828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.575 [2024-10-07 09:57:52.214098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
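The target configuration that follows is driven entirely through rpc.py over /var/tmp/spdk.sock. Pulled out of the trace below and condensed, the sequence target/fio.sh issues is roughly the following; paths are shortened to $rpc_py, output capture and error handling are omitted, and the ordering is as it appears in the log.

  # Roughly the RPC sequence target/fio.sh runs below (condensed; the seven
  # bdev_malloc_create calls produce Malloc0..Malloc6).
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc_py nvmf_create_transport -t tcp -o -u 8192       # TCP transport
  $rpc_py bdev_malloc_create 64 512                     # Malloc0, Malloc1: plain namespaces
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_malloc_create 64 512                     # Malloc2, Malloc3: raid0 members
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc_py bdev_malloc_create 64 512                     # Malloc4..Malloc6: concat0 members
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Kernel initiator side, with the hostnqn/hostid values printed in the trace:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
       --hostid=cd6acfbe-4794-e311-a299-001e67a97b02

After the connect, waitforserial counts four block devices with serial SPDKISFASTANDAWESOME (nvme0n1..nvme0n4), one per namespace added above, before handing them to fio.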
00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.575 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:57.834 [2024-10-07 09:57:52.578975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.834 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:58.401 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:58.401 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:58.661 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:58.661 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:59.228 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:59.228 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:59.795 09:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:59.795 09:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:00.362 09:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:00.620 09:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:00.620 09:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.186 09:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:01.186 09:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.444 09:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:37:01.444 09:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:01.703 09:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:01.960 09:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:01.960 09:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.526 09:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:02.526 09:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:02.784 09:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:03.042 [2024-10-07 09:57:57.767078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.042 09:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:03.608 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:03.868 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:37:04.125 09:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:37:06.064 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:06.064 [global] 00:37:06.064 thread=1 00:37:06.064 invalidate=1 00:37:06.064 rw=write 00:37:06.064 time_based=1 00:37:06.064 runtime=1 00:37:06.064 ioengine=libaio 00:37:06.064 direct=1 00:37:06.064 bs=4096 00:37:06.064 iodepth=1 00:37:06.064 norandommap=0 00:37:06.064 numjobs=1 00:37:06.064 00:37:06.064 verify_dump=1 00:37:06.064 verify_backlog=512 00:37:06.064 verify_state_save=0 00:37:06.064 do_verify=1 00:37:06.064 verify=crc32c-intel 00:37:06.064 [job0] 00:37:06.064 filename=/dev/nvme0n1 00:37:06.064 [job1] 00:37:06.064 filename=/dev/nvme0n2 00:37:06.064 [job2] 00:37:06.064 filename=/dev/nvme0n3 00:37:06.064 [job3] 00:37:06.064 filename=/dev/nvme0n4 00:37:06.348 Could not set queue depth (nvme0n1) 00:37:06.348 Could not set queue depth (nvme0n2) 00:37:06.348 Could not set queue depth (nvme0n3) 00:37:06.348 Could not set queue depth (nvme0n4) 00:37:06.348 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.348 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.348 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.348 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.348 fio-3.35 00:37:06.348 Starting 4 threads 00:37:07.721 00:37:07.721 job0: (groupid=0, jobs=1): err= 0: pid=1715630: Mon Oct 7 09:58:02 2024 00:37:07.721 read: IOPS=147, BW=590KiB/s (604kB/s)(596KiB/1010msec) 00:37:07.721 slat (nsec): min=7231, max=40224, avg=12851.97, stdev=5215.00 00:37:07.721 clat (usec): min=242, max=41344, avg=5855.08, stdev=13873.63 00:37:07.721 lat (usec): min=255, max=41353, avg=5867.94, stdev=13874.51 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 326], 00:37:07.721 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 461], 00:37:07.721 | 70.00th=[ 498], 80.00th=[ 578], 90.00th=[41157], 95.00th=[41157], 00:37:07.721 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:07.721 | 99.99th=[41157] 00:37:07.721 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:37:07.721 slat (nsec): min=8845, max=36970, avg=13129.37, stdev=4727.89 00:37:07.721 clat (usec): min=175, max=411, avg=245.75, stdev=46.98 00:37:07.721 lat (usec): min=188, max=426, avg=258.88, stdev=47.21 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 210], 00:37:07.721 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 239], 00:37:07.721 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 334], 00:37:07.721 | 99.00th=[ 
388], 99.50th=[ 396], 99.90th=[ 412], 99.95th=[ 412], 00:37:07.721 | 99.99th=[ 412] 00:37:07.721 bw ( KiB/s): min= 4096, max= 4096, per=21.74%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.721 lat (usec) : 250=49.92%, 500=43.72%, 750=3.33% 00:37:07.721 lat (msec) : 50=3.03% 00:37:07.721 cpu : usr=0.69%, sys=0.89%, ctx=662, majf=0, minf=2 00:37:07.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.721 job1: (groupid=0, jobs=1): err= 0: pid=1715631: Mon Oct 7 09:58:02 2024 00:37:07.721 read: IOPS=257, BW=1030KiB/s (1055kB/s)(1056KiB/1025msec) 00:37:07.721 slat (nsec): min=8976, max=41434, avg=11043.19, stdev=4061.19 00:37:07.721 clat (usec): min=225, max=42009, avg=3347.13, stdev=10816.64 00:37:07.721 lat (usec): min=235, max=42023, avg=3358.18, stdev=10818.22 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:37:07.721 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:37:07.721 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[41157], 00:37:07.721 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:07.721 | 99.99th=[42206] 00:37:07.721 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:37:07.721 slat (nsec): min=9349, max=45557, avg=12403.05, stdev=4659.51 00:37:07.721 clat (usec): min=173, max=3351, avg=251.60, stdev=146.16 00:37:07.721 lat (usec): min=183, max=3362, avg=264.00, stdev=146.25 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:37:07.721 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 243], 00:37:07.721 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 388], 00:37:07.721 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 3359], 99.95th=[ 3359], 00:37:07.721 | 99.99th=[ 3359] 00:37:07.721 bw ( KiB/s): min= 4096, max= 4096, per=21.74%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.721 lat (usec) : 250=60.70%, 500=36.60% 00:37:07.721 lat (msec) : 4=0.13%, 50=2.58% 00:37:07.721 cpu : usr=0.49%, sys=0.68%, ctx=777, majf=0, minf=1 00:37:07.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 issued rwts: total=264,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.721 job2: (groupid=0, jobs=1): err= 0: pid=1715632: Mon Oct 7 09:58:02 2024 00:37:07.721 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:37:07.721 slat (nsec): min=5914, max=82837, avg=13134.83, stdev=6344.57 00:37:07.721 clat (usec): min=227, max=622, avg=354.13, stdev=78.15 00:37:07.721 lat (usec): min=237, max=632, avg=367.26, stdev=79.12 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:37:07.721 | 30.00th=[ 293], 40.00th=[ 322], 50.00th=[ 338], 
60.00th=[ 359], 00:37:07.721 | 70.00th=[ 396], 80.00th=[ 433], 90.00th=[ 465], 95.00th=[ 498], 00:37:07.721 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 627], 00:37:07.721 | 99.99th=[ 627] 00:37:07.721 write: IOPS=1893, BW=7572KiB/s (7754kB/s)(7572KiB/1000msec); 0 zone resets 00:37:07.721 slat (nsec): min=7595, max=56261, avg=12812.47, stdev=4363.31 00:37:07.721 clat (usec): min=159, max=621, avg=211.01, stdev=34.39 00:37:07.721 lat (usec): min=169, max=633, avg=223.82, stdev=34.78 00:37:07.721 clat percentiles (usec): 00:37:07.721 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 182], 00:37:07.721 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:37:07.721 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 273], 00:37:07.721 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 482], 99.95th=[ 619], 00:37:07.721 | 99.99th=[ 619] 00:37:07.721 bw ( KiB/s): min= 8192, max= 8192, per=43.48%, avg=8192.00, stdev= 0.00, samples=1 00:37:07.721 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:07.721 lat (usec) : 250=49.66%, 500=48.29%, 750=2.04% 00:37:07.721 cpu : usr=2.20%, sys=4.30%, ctx=3432, majf=0, minf=1 00:37:07.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.721 issued rwts: total=1536,1893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.722 job3: (groupid=0, jobs=1): err= 0: pid=1715633: Mon Oct 7 09:58:02 2024 00:37:07.722 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:37:07.722 slat (nsec): min=6170, max=49732, avg=11370.38, stdev=5407.11 00:37:07.722 clat (usec): min=197, max=720, avg=348.18, stdev=77.72 00:37:07.722 lat (usec): min=207, max=738, avg=359.55, stdev=79.19 00:37:07.722 clat percentiles (usec): 00:37:07.722 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 289], 00:37:07.722 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 347], 00:37:07.722 | 70.00th=[ 388], 80.00th=[ 424], 90.00th=[ 461], 95.00th=[ 486], 00:37:07.722 | 99.00th=[ 529], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 717], 00:37:07.722 | 99.99th=[ 717] 00:37:07.722 write: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec); 0 zone resets 00:37:07.722 slat (nsec): min=7271, max=55660, avg=11532.62, stdev=5282.33 00:37:07.722 clat (usec): min=155, max=649, avg=216.98, stdev=43.81 00:37:07.722 lat (usec): min=170, max=660, avg=228.52, stdev=45.62 00:37:07.722 clat percentiles (usec): 00:37:07.722 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 186], 00:37:07.722 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 215], 00:37:07.722 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 293], 00:37:07.722 | 99.00th=[ 420], 99.50th=[ 457], 99.90th=[ 490], 99.95th=[ 652], 00:37:07.722 | 99.99th=[ 652] 00:37:07.722 bw ( KiB/s): min= 8192, max= 8192, per=43.48%, avg=8192.00, stdev= 0.00, samples=1 00:37:07.722 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:07.722 lat (usec) : 250=49.96%, 500=48.39%, 750=1.65% 00:37:07.722 cpu : usr=2.00%, sys=4.30%, ctx=3447, majf=0, minf=1 00:37:07.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.722 issued rwts: total=1536,1911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.722 00:37:07.722 Run status group 0 (all jobs): 00:37:07.722 READ: bw=13.3MiB/s (13.9MB/s), 590KiB/s-6144KiB/s (604kB/s-6291kB/s), io=13.6MiB (14.3MB), run=1000-1025msec 00:37:07.722 WRITE: bw=18.4MiB/s (19.3MB/s), 1998KiB/s-7636KiB/s (2046kB/s-7820kB/s), io=18.9MiB (19.8MB), run=1000-1025msec 00:37:07.722 00:37:07.722 Disk stats (read/write): 00:37:07.722 nvme0n1: ios=195/512, merge=0/0, ticks=846/123, in_queue=969, util=98.80% 00:37:07.722 nvme0n2: ios=272/512, merge=0/0, ticks=699/120, in_queue=819, util=86.75% 00:37:07.722 nvme0n3: ios=1367/1536, merge=0/0, ticks=816/324, in_queue=1140, util=97.91% 00:37:07.722 nvme0n4: ios=1333/1536, merge=0/0, ticks=449/335, in_queue=784, util=89.56% 00:37:07.722 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:07.722 [global] 00:37:07.722 thread=1 00:37:07.722 invalidate=1 00:37:07.722 rw=randwrite 00:37:07.722 time_based=1 00:37:07.722 runtime=1 00:37:07.722 ioengine=libaio 00:37:07.722 direct=1 00:37:07.722 bs=4096 00:37:07.722 iodepth=1 00:37:07.722 norandommap=0 00:37:07.722 numjobs=1 00:37:07.722 00:37:07.722 verify_dump=1 00:37:07.722 verify_backlog=512 00:37:07.722 verify_state_save=0 00:37:07.722 do_verify=1 00:37:07.722 verify=crc32c-intel 00:37:07.722 [job0] 00:37:07.722 filename=/dev/nvme0n1 00:37:07.722 [job1] 00:37:07.722 filename=/dev/nvme0n2 00:37:07.722 [job2] 00:37:07.722 filename=/dev/nvme0n3 00:37:07.722 [job3] 00:37:07.722 filename=/dev/nvme0n4 00:37:07.722 Could not set queue depth (nvme0n1) 00:37:07.722 Could not set queue depth (nvme0n2) 00:37:07.722 Could not set queue depth (nvme0n3) 00:37:07.722 Could not set queue depth (nvme0n4) 00:37:07.980 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.980 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.980 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.980 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.980 fio-3.35 00:37:07.980 Starting 4 threads 00:37:09.353 00:37:09.353 job0: (groupid=0, jobs=1): err= 0: pid=1715977: Mon Oct 7 09:58:03 2024 00:37:09.353 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:37:09.353 slat (nsec): min=12665, max=18826, avg=17301.96, stdev=1507.96 00:37:09.353 clat (usec): min=280, max=41981, avg=39275.28, stdev=8505.80 00:37:09.353 lat (usec): min=298, max=41999, avg=39292.59, stdev=8505.73 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:09.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:09.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:37:09.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:09.353 | 99.99th=[42206] 00:37:09.353 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:37:09.353 slat (nsec): min=13041, max=33708, avg=14549.53, stdev=2393.30 00:37:09.353 clat (usec): min=177, max=444, avg=207.49, stdev=20.07 00:37:09.353 lat (usec): 
min=191, max=458, avg=222.04, stdev=20.68 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 192], 00:37:09.353 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:37:09.353 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:37:09.353 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 445], 99.95th=[ 445], 00:37:09.353 | 99.99th=[ 445] 00:37:09.353 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:37:09.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:09.353 lat (usec) : 250=94.02%, 500=1.87% 00:37:09.353 lat (msec) : 50=4.11% 00:37:09.353 cpu : usr=0.69%, sys=0.79%, ctx=536, majf=0, minf=1 00:37:09.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.353 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.353 job1: (groupid=0, jobs=1): err= 0: pid=1715978: Mon Oct 7 09:58:03 2024 00:37:09.353 read: IOPS=920, BW=3680KiB/s (3769kB/s)(3684KiB/1001msec) 00:37:09.353 slat (nsec): min=5550, max=77060, avg=16105.87, stdev=8914.15 00:37:09.353 clat (usec): min=220, max=41268, avg=797.89, stdev=4301.26 00:37:09.353 lat (usec): min=227, max=41277, avg=813.99, stdev=4301.61 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 229], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 297], 00:37:09.353 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 334], 00:37:09.353 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 424], 00:37:09.353 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:09.353 | 99.99th=[41157] 00:37:09.353 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:37:09.353 slat (nsec): min=7754, max=40236, avg=10090.81, stdev=3168.42 00:37:09.353 clat (usec): min=155, max=866, avg=227.74, stdev=62.98 00:37:09.353 lat (usec): min=164, max=876, avg=237.83, stdev=63.07 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:37:09.353 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 221], 00:37:09.353 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:37:09.353 | 99.00th=[ 355], 99.50th=[ 420], 99.90th=[ 783], 99.95th=[ 865], 00:37:09.353 | 99.99th=[ 865] 00:37:09.353 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:37:09.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:09.353 lat (usec) : 250=34.70%, 500=64.47%, 750=0.10%, 1000=0.15% 00:37:09.353 lat (msec) : 50=0.57% 00:37:09.353 cpu : usr=1.40%, sys=2.50%, ctx=1946, majf=0, minf=2 00:37:09.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.353 issued rwts: total=921,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.353 job2: (groupid=0, jobs=1): err= 0: pid=1715979: Mon Oct 7 09:58:03 2024 00:37:09.353 read: IOPS=511, BW=2048KiB/s (2097kB/s)(2064KiB/1008msec) 00:37:09.353 slat (nsec): min=6574, max=59399, 
avg=11965.40, stdev=5853.63 00:37:09.353 clat (usec): min=230, max=41261, avg=1452.27, stdev=6849.93 00:37:09.353 lat (usec): min=237, max=41270, avg=1464.24, stdev=6851.12 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 253], 00:37:09.353 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:37:09.353 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 343], 00:37:09.353 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:09.353 | 99.99th=[41157] 00:37:09.353 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:37:09.353 slat (nsec): min=8324, max=52085, avg=11900.72, stdev=5125.35 00:37:09.353 clat (usec): min=159, max=882, avg=229.69, stdev=60.74 00:37:09.353 lat (usec): min=174, max=893, avg=241.59, stdev=60.24 00:37:09.353 clat percentiles (usec): 00:37:09.353 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:37:09.353 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 210], 60.00th=[ 237], 00:37:09.353 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:37:09.353 | 99.00th=[ 338], 99.50th=[ 396], 99.90th=[ 832], 99.95th=[ 881], 00:37:09.353 | 99.99th=[ 881] 00:37:09.353 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=2 00:37:09.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:37:09.353 lat (usec) : 250=43.44%, 500=55.32%, 750=0.13%, 1000=0.13% 00:37:09.353 lat (msec) : 50=0.97% 00:37:09.353 cpu : usr=0.70%, sys=1.89%, ctx=1543, majf=0, minf=1 00:37:09.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.354 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.354 job3: (groupid=0, jobs=1): err= 0: pid=1715980: Mon Oct 7 09:58:03 2024 00:37:09.354 read: IOPS=519, BW=2077KiB/s (2127kB/s)(2104KiB/1013msec) 00:37:09.354 slat (nsec): min=6139, max=43771, avg=13362.59, stdev=6047.11 00:37:09.354 clat (usec): min=216, max=42003, avg=1514.86, stdev=7001.82 00:37:09.354 lat (usec): min=223, max=42019, avg=1528.23, stdev=7002.55 00:37:09.354 clat percentiles (usec): 00:37:09.354 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 235], 00:37:09.354 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 281], 60.00th=[ 289], 00:37:09.354 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 371], 00:37:09.354 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:37:09.354 | 99.99th=[42206] 00:37:09.354 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:37:09.354 slat (nsec): min=8275, max=48050, avg=12114.67, stdev=4008.39 00:37:09.354 clat (usec): min=152, max=581, avg=186.71, stdev=31.87 00:37:09.354 lat (usec): min=161, max=594, avg=198.82, stdev=33.74 00:37:09.354 clat percentiles (usec): 00:37:09.354 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 159], 00:37:09.354 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 192], 00:37:09.354 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 233], 00:37:09.354 | 99.00th=[ 253], 99.50th=[ 289], 99.90th=[ 537], 99.95th=[ 586], 00:37:09.354 | 99.99th=[ 586] 00:37:09.354 bw ( KiB/s): min= 8192, max= 8192, per=58.29%, avg=8192.00, stdev= 0.00, samples=1 00:37:09.354 iops : 
min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:09.354 lat (usec) : 250=78.97%, 500=19.87%, 750=0.13% 00:37:09.354 lat (msec) : 50=1.03% 00:37:09.354 cpu : usr=1.09%, sys=1.78%, ctx=1551, majf=0, minf=2 00:37:09.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.354 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.354 00:37:09.354 Run status group 0 (all jobs): 00:37:09.354 READ: bw=7788KiB/s (7975kB/s), 90.2KiB/s-3680KiB/s (92.4kB/s-3769kB/s), io=7944KiB (8135kB), run=1001-1020msec 00:37:09.354 WRITE: bw=13.7MiB/s (14.4MB/s), 2008KiB/s-4092KiB/s (2056kB/s-4190kB/s), io=14.0MiB (14.7MB), run=1001-1020msec 00:37:09.354 00:37:09.354 Disk stats (read/write): 00:37:09.354 nvme0n1: ios=35/512, merge=0/0, ticks=915/106, in_queue=1021, util=96.69% 00:37:09.354 nvme0n2: ios=555/941, merge=0/0, ticks=940/216, in_queue=1156, util=100.00% 00:37:09.354 nvme0n3: ios=556/993, merge=0/0, ticks=928/217, in_queue=1145, util=100.00% 00:37:09.354 nvme0n4: ios=571/1024, merge=0/0, ticks=937/192, in_queue=1129, util=100.00% 00:37:09.354 09:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:09.354 [global] 00:37:09.354 thread=1 00:37:09.354 invalidate=1 00:37:09.354 rw=write 00:37:09.354 time_based=1 00:37:09.354 runtime=1 00:37:09.354 ioengine=libaio 00:37:09.354 direct=1 00:37:09.354 bs=4096 00:37:09.354 iodepth=128 00:37:09.354 norandommap=0 00:37:09.354 numjobs=1 00:37:09.354 00:37:09.354 verify_dump=1 00:37:09.354 verify_backlog=512 00:37:09.354 verify_state_save=0 00:37:09.354 do_verify=1 00:37:09.354 verify=crc32c-intel 00:37:09.354 [job0] 00:37:09.354 filename=/dev/nvme0n1 00:37:09.354 [job1] 00:37:09.354 filename=/dev/nvme0n2 00:37:09.354 [job2] 00:37:09.354 filename=/dev/nvme0n3 00:37:09.354 [job3] 00:37:09.354 filename=/dev/nvme0n4 00:37:09.354 Could not set queue depth (nvme0n1) 00:37:09.354 Could not set queue depth (nvme0n2) 00:37:09.354 Could not set queue depth (nvme0n3) 00:37:09.354 Could not set queue depth (nvme0n4) 00:37:09.354 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.354 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.354 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.354 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.354 fio-3.35 00:37:09.354 Starting 4 threads 00:37:10.728 00:37:10.728 job0: (groupid=0, jobs=1): err= 0: pid=1716208: Mon Oct 7 09:58:05 2024 00:37:10.728 read: IOPS=4022, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec) 00:37:10.728 slat (usec): min=2, max=20069, avg=123.84, stdev=892.39 00:37:10.728 clat (usec): min=1173, max=56594, avg=16650.57, stdev=11277.64 00:37:10.728 lat (usec): min=1291, max=56611, avg=16774.41, stdev=11344.59 00:37:10.728 clat percentiles (usec): 00:37:10.728 | 1.00th=[ 3359], 5.00th=[ 6390], 10.00th=[ 8717], 20.00th=[10290], 00:37:10.728 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12649], 60.00th=[13304], 00:37:10.728 | 
70.00th=[14746], 80.00th=[20055], 90.00th=[38536], 95.00th=[43779], 00:37:10.728 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:37:10.728 | 99.99th=[56361] 00:37:10.728 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:37:10.728 slat (usec): min=3, max=15539, avg=111.16, stdev=682.62 00:37:10.728 clat (usec): min=5452, max=41271, avg=14358.70, stdev=5554.31 00:37:10.728 lat (usec): min=5459, max=41278, avg=14469.86, stdev=5601.61 00:37:10.728 clat percentiles (usec): 00:37:10.728 | 1.00th=[ 5932], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11076], 00:37:10.728 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:37:10.728 | 70.00th=[13698], 80.00th=[14484], 90.00th=[24511], 95.00th=[27132], 00:37:10.728 | 99.00th=[34866], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:37:10.728 | 99.99th=[41157] 00:37:10.728 bw ( KiB/s): min=14192, max=18576, per=25.58%, avg=16384.00, stdev=3099.96, samples=2 00:37:10.728 iops : min= 3548, max= 4644, avg=4096.00, stdev=774.99, samples=2 00:37:10.728 lat (msec) : 2=0.23%, 4=0.97%, 10=11.65%, 20=70.93%, 50=15.17% 00:37:10.728 lat (msec) : 100=1.04% 00:37:10.728 cpu : usr=3.29%, sys=5.38%, ctx=339, majf=0, minf=1 00:37:10.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:10.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.728 issued rwts: total=4043,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.728 job1: (groupid=0, jobs=1): err= 0: pid=1716209: Mon Oct 7 09:58:05 2024 00:37:10.728 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:37:10.728 slat (usec): min=3, max=24762, avg=132.37, stdev=1057.81 00:37:10.728 clat (usec): min=4228, max=88086, avg=17157.29, stdev=14517.87 00:37:10.728 lat (usec): min=4238, max=88094, avg=17289.66, stdev=14605.77 00:37:10.728 clat percentiles (usec): 00:37:10.728 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10945], 00:37:10.728 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12780], 00:37:10.728 | 70.00th=[14615], 80.00th=[17433], 90.00th=[23200], 95.00th=[47973], 00:37:10.728 | 99.00th=[85459], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:37:10.728 | 99.99th=[88605] 00:37:10.728 write: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1010msec); 0 zone resets 00:37:10.728 slat (usec): min=4, max=10562, avg=96.78, stdev=565.10 00:37:10.728 clat (usec): min=1841, max=51841, avg=12962.50, stdev=5873.61 00:37:10.728 lat (usec): min=1851, max=51850, avg=13059.28, stdev=5907.90 00:37:10.728 clat percentiles (usec): 00:37:10.728 | 1.00th=[ 4948], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[10552], 00:37:10.728 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:37:10.728 | 70.00th=[12387], 80.00th=[13042], 90.00th=[18220], 95.00th=[25297], 00:37:10.728 | 99.00th=[40109], 99.50th=[43779], 99.90th=[51643], 99.95th=[51643], 00:37:10.728 | 99.99th=[51643] 00:37:10.728 bw ( KiB/s): min=14000, max=20480, per=26.92%, avg=17240.00, stdev=4582.05, samples=2 00:37:10.728 iops : min= 3500, max= 5120, avg=4310.00, stdev=1145.51, samples=2 00:37:10.729 lat (msec) : 2=0.07%, 4=0.19%, 10=10.73%, 20=78.42%, 50=8.26% 00:37:10.729 lat (msec) : 100=2.33% 00:37:10.729 cpu : usr=2.87%, sys=6.64%, ctx=411, majf=0, minf=1 00:37:10.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.3% 00:37:10.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.729 issued rwts: total=4096,4438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.729 job2: (groupid=0, jobs=1): err= 0: pid=1716210: Mon Oct 7 09:58:05 2024 00:37:10.729 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:37:10.729 slat (usec): min=3, max=12961, avg=122.49, stdev=799.42 00:37:10.729 clat (usec): min=4274, max=56217, avg=16891.59, stdev=6369.69 00:37:10.729 lat (usec): min=4284, max=56830, avg=17014.08, stdev=6386.42 00:37:10.729 clat percentiles (usec): 00:37:10.729 | 1.00th=[ 6194], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[12256], 00:37:10.729 | 30.00th=[13173], 40.00th=[13960], 50.00th=[15139], 60.00th=[17957], 00:37:10.729 | 70.00th=[20317], 80.00th=[21627], 90.00th=[23200], 95.00th=[24249], 00:37:10.729 | 99.00th=[41681], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:37:10.729 | 99.99th=[56361] 00:37:10.729 write: IOPS=3597, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1002msec); 0 zone resets 00:37:10.729 slat (usec): min=4, max=24795, avg=146.15, stdev=1111.88 00:37:10.729 clat (usec): min=827, max=69455, avg=18319.23, stdev=10952.42 00:37:10.729 lat (usec): min=2525, max=69466, avg=18465.38, stdev=11036.42 00:37:10.729 clat percentiles (usec): 00:37:10.729 | 1.00th=[ 7898], 5.00th=[10421], 10.00th=[11207], 20.00th=[12780], 00:37:10.729 | 30.00th=[13698], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:37:10.729 | 70.00th=[17171], 80.00th=[19792], 90.00th=[33817], 95.00th=[47973], 00:37:10.729 | 99.00th=[65799], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:37:10.729 | 99.99th=[69731] 00:37:10.729 bw ( KiB/s): min=11608, max=17064, per=22.39%, avg=14336.00, stdev=3857.97, samples=2 00:37:10.729 iops : min= 2902, max= 4266, avg=3584.00, stdev=964.49, samples=2 00:37:10.729 lat (usec) : 1000=0.01% 00:37:10.729 lat (msec) : 4=0.28%, 10=6.41%, 20=67.95%, 50=22.73%, 100=2.62% 00:37:10.729 cpu : usr=4.10%, sys=5.49%, ctx=255, majf=0, minf=1 00:37:10.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:10.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.729 issued rwts: total=3584,3605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.729 job3: (groupid=0, jobs=1): err= 0: pid=1716212: Mon Oct 7 09:58:05 2024 00:37:10.729 read: IOPS=4061, BW=15.9MiB/s (16.6MB/s)(16.6MiB/1046msec) 00:37:10.729 slat (usec): min=2, max=38371, avg=116.12, stdev=911.09 00:37:10.729 clat (usec): min=6391, max=60419, avg=15375.33, stdev=7964.84 00:37:10.729 lat (usec): min=6399, max=65171, avg=15491.45, stdev=8012.22 00:37:10.729 clat percentiles (usec): 00:37:10.729 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11338], 00:37:10.729 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13698], 60.00th=[15008], 00:37:10.729 | 70.00th=[15664], 80.00th=[17433], 90.00th=[20055], 95.00th=[23200], 00:37:10.729 | 99.00th=[54264], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:37:10.729 | 99.99th=[60556] 00:37:10.729 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1046msec); 0 zone resets 00:37:10.729 slat (usec): min=4, max=15089, avg=102.79, stdev=650.13 00:37:10.729 clat (usec): min=5532, 
max=69766, avg=14533.47, stdev=7997.57 00:37:10.729 lat (usec): min=5542, max=69775, avg=14636.27, stdev=8022.06 00:37:10.729 clat percentiles (usec): 00:37:10.729 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[10945], 00:37:10.729 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:37:10.729 | 70.00th=[14615], 80.00th=[15008], 90.00th=[17171], 95.00th=[21627], 00:37:10.729 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:37:10.729 | 99.99th=[69731] 00:37:10.729 bw ( KiB/s): min=17648, max=19216, per=28.78%, avg=18432.00, stdev=1108.74, samples=2 00:37:10.729 iops : min= 4412, max= 4804, avg=4608.00, stdev=277.19, samples=2 00:37:10.729 lat (msec) : 10=9.39%, 20=83.12%, 50=5.45%, 100=2.03% 00:37:10.729 cpu : usr=3.83%, sys=5.74%, ctx=407, majf=0, minf=1 00:37:10.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:10.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.729 issued rwts: total=4248,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.729 00:37:10.729 Run status group 0 (all jobs): 00:37:10.729 READ: bw=59.6MiB/s (62.5MB/s), 14.0MiB/s-15.9MiB/s (14.7MB/s-16.6MB/s), io=62.4MiB (65.4MB), run=1002-1046msec 00:37:10.729 WRITE: bw=62.5MiB/s (65.6MB/s), 14.1MiB/s-17.2MiB/s (14.7MB/s-18.0MB/s), io=65.4MiB (68.6MB), run=1002-1046msec 00:37:10.729 00:37:10.729 Disk stats (read/write): 00:37:10.729 nvme0n1: ios=3122/3423, merge=0/0, ticks=28731/26190, in_queue=54921, util=86.97% 00:37:10.729 nvme0n2: ios=3537/3584, merge=0/0, ticks=30515/26631, in_queue=57146, util=96.24% 00:37:10.729 nvme0n3: ios=2910/3072, merge=0/0, ticks=38345/38761, in_queue=77106, util=97.91% 00:37:10.729 nvme0n4: ios=3598/3939, merge=0/0, ticks=38637/37299, in_queue=75936, util=90.50% 00:37:10.729 09:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:10.729 [global] 00:37:10.729 thread=1 00:37:10.729 invalidate=1 00:37:10.729 rw=randwrite 00:37:10.729 time_based=1 00:37:10.729 runtime=1 00:37:10.729 ioengine=libaio 00:37:10.729 direct=1 00:37:10.729 bs=4096 00:37:10.729 iodepth=128 00:37:10.729 norandommap=0 00:37:10.729 numjobs=1 00:37:10.729 00:37:10.729 verify_dump=1 00:37:10.729 verify_backlog=512 00:37:10.729 verify_state_save=0 00:37:10.729 do_verify=1 00:37:10.729 verify=crc32c-intel 00:37:10.729 [job0] 00:37:10.729 filename=/dev/nvme0n1 00:37:10.729 [job1] 00:37:10.729 filename=/dev/nvme0n2 00:37:10.729 [job2] 00:37:10.729 filename=/dev/nvme0n3 00:37:10.729 [job3] 00:37:10.729 filename=/dev/nvme0n4 00:37:10.729 Could not set queue depth (nvme0n1) 00:37:10.729 Could not set queue depth (nvme0n2) 00:37:10.729 Could not set queue depth (nvme0n3) 00:37:10.729 Could not set queue depth (nvme0n4) 00:37:10.988 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:10.988 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:10.988 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:10.988 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:37:10.988 fio-3.35 00:37:10.988 Starting 4 threads 00:37:12.363 00:37:12.363 job0: (groupid=0, jobs=1): err= 0: pid=1716439: Mon Oct 7 09:58:06 2024 00:37:12.363 read: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1002msec) 00:37:12.363 slat (usec): min=4, max=7761, avg=132.87, stdev=680.28 00:37:12.363 clat (usec): min=823, max=29981, avg=17053.20, stdev=5639.00 00:37:12.363 lat (usec): min=3003, max=29990, avg=17186.07, stdev=5653.76 00:37:12.363 clat percentiles (usec): 00:37:12.363 | 1.00th=[ 5800], 5.00th=[10028], 10.00th=[10683], 20.00th=[11076], 00:37:12.363 | 30.00th=[11731], 40.00th=[12649], 50.00th=[17957], 60.00th=[20841], 00:37:12.363 | 70.00th=[21627], 80.00th=[22152], 90.00th=[23200], 95.00th=[24249], 00:37:12.363 | 99.00th=[29230], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:37:12.363 | 99.99th=[30016] 00:37:12.363 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:37:12.363 slat (usec): min=5, max=11295, avg=139.17, stdev=755.82 00:37:12.363 clat (usec): min=8783, max=33462, avg=18280.65, stdev=6135.15 00:37:12.363 lat (usec): min=8863, max=33470, avg=18419.82, stdev=6140.50 00:37:12.363 clat percentiles (usec): 00:37:12.363 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11076], 20.00th=[11338], 00:37:12.363 | 30.00th=[12125], 40.00th=[17433], 50.00th=[18744], 60.00th=[20317], 00:37:12.363 | 70.00th=[21103], 80.00th=[22676], 90.00th=[26608], 95.00th=[30016], 00:37:12.363 | 99.00th=[33162], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:37:12.363 | 99.99th=[33424] 00:37:12.363 bw ( KiB/s): min=12288, max=12288, per=19.10%, avg=12288.00, stdev= 0.00, samples=1 00:37:12.363 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:37:12.363 lat (usec) : 1000=0.01% 00:37:12.363 lat (msec) : 4=0.45%, 10=2.46%, 20=53.50%, 50=43.58% 00:37:12.363 cpu : usr=3.60%, sys=6.29%, ctx=387, majf=0, minf=1 00:37:12.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:12.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.363 issued rwts: total=3536,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.363 job1: (groupid=0, jobs=1): err= 0: pid=1716440: Mon Oct 7 09:58:06 2024 00:37:12.363 read: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1005msec) 00:37:12.363 slat (usec): min=2, max=14234, avg=141.45, stdev=836.61 00:37:12.363 clat (usec): min=2701, max=49341, avg=18467.31, stdev=7103.75 00:37:12.363 lat (usec): min=6637, max=49345, avg=18608.76, stdev=7126.02 00:37:12.363 clat percentiles (usec): 00:37:12.363 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[11600], 00:37:12.363 | 30.00th=[14353], 40.00th=[17957], 50.00th=[18482], 60.00th=[20055], 00:37:12.363 | 70.00th=[20841], 80.00th=[21890], 90.00th=[27395], 95.00th=[33162], 00:37:12.363 | 99.00th=[40633], 99.50th=[41157], 99.90th=[49546], 99.95th=[49546], 00:37:12.363 | 99.99th=[49546] 00:37:12.363 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:37:12.363 slat (usec): min=3, max=27599, avg=141.68, stdev=1061.32 00:37:12.363 clat (usec): min=454, max=57150, avg=18825.36, stdev=8319.52 00:37:12.363 lat (usec): min=478, max=57170, avg=18967.04, stdev=8394.99 00:37:12.363 clat percentiles (usec): 00:37:12.363 | 1.00th=[ 6128], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[10814], 00:37:12.363 | 30.00th=[14615], 40.00th=[16319], 
50.00th=[18744], 60.00th=[19792], 00:37:12.363 | 70.00th=[21103], 80.00th=[22152], 90.00th=[28705], 95.00th=[39060], 00:37:12.363 | 99.00th=[45876], 99.50th=[45876], 99.90th=[54264], 99.95th=[55837], 00:37:12.363 | 99.99th=[57410] 00:37:12.363 bw ( KiB/s): min=12288, max=16384, per=22.28%, avg=14336.00, stdev=2896.31, samples=2 00:37:12.363 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:37:12.363 lat (usec) : 500=0.03% 00:37:12.363 lat (msec) : 2=0.03%, 4=0.32%, 10=9.22%, 20=50.90%, 50=39.43% 00:37:12.363 lat (msec) : 100=0.07% 00:37:12.363 cpu : usr=1.89%, sys=6.37%, ctx=210, majf=0, minf=1 00:37:12.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:12.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.363 issued rwts: total=3226,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.363 job2: (groupid=0, jobs=1): err= 0: pid=1716441: Mon Oct 7 09:58:06 2024 00:37:12.363 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:37:12.363 slat (usec): min=4, max=11517, avg=121.69, stdev=819.92 00:37:12.363 clat (usec): min=5332, max=40246, avg=15695.36, stdev=6586.07 00:37:12.363 lat (usec): min=5339, max=40254, avg=15817.05, stdev=6620.03 00:37:12.363 clat percentiles (usec): 00:37:12.363 | 1.00th=[ 6521], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10552], 00:37:12.363 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13829], 60.00th=[15139], 00:37:12.363 | 70.00th=[17171], 80.00th=[20317], 90.00th=[25560], 95.00th=[27657], 00:37:12.363 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:37:12.363 | 99.99th=[40109] 00:37:12.363 write: IOPS=3686, BW=14.4MiB/s (15.1MB/s)(14.6MiB/1013msec); 0 zone resets 00:37:12.363 slat (usec): min=5, max=14930, avg=140.60, stdev=920.63 00:37:12.364 clat (msec): min=3, max=110, avg=19.14, stdev=16.98 00:37:12.364 lat (msec): min=3, max=110, avg=19.28, stdev=17.08 00:37:12.364 clat percentiles (msec): 00:37:12.364 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:37:12.364 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 17], 00:37:12.364 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 29], 95.00th=[ 57], 00:37:12.364 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 111], 00:37:12.364 | 99.99th=[ 111] 00:37:12.364 bw ( KiB/s): min=12288, max=16568, per=22.43%, avg=14428.00, stdev=3026.42, samples=2 00:37:12.364 iops : min= 3072, max= 4142, avg=3607.00, stdev=756.60, samples=2 00:37:12.364 lat (msec) : 4=0.11%, 10=15.66%, 20=60.84%, 50=20.68%, 100=2.16% 00:37:12.364 lat (msec) : 250=0.56% 00:37:12.364 cpu : usr=3.56%, sys=7.51%, ctx=274, majf=0, minf=1 00:37:12.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:12.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.364 issued rwts: total=3584,3734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.364 job3: (groupid=0, jobs=1): err= 0: pid=1716442: Mon Oct 7 09:58:06 2024 00:37:12.364 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:37:12.364 slat (usec): min=4, max=11352, avg=101.37, stdev=717.46 00:37:12.364 clat (usec): min=3586, max=23522, avg=12878.93, stdev=3321.52 00:37:12.364 lat (usec): min=3596, 
max=23532, avg=12980.30, stdev=3355.79 00:37:12.364 clat percentiles (usec): 00:37:12.364 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10683], 00:37:12.364 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:37:12.364 | 70.00th=[13566], 80.00th=[15664], 90.00th=[17433], 95.00th=[19792], 00:37:12.364 | 99.00th=[23200], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:37:12.364 | 99.99th=[23462] 00:37:12.364 write: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(21.1MiB/1012msec); 0 zone resets 00:37:12.364 slat (usec): min=6, max=10032, avg=79.84, stdev=554.10 00:37:12.364 clat (usec): min=1030, max=23526, avg=11573.73, stdev=2825.92 00:37:12.364 lat (usec): min=1043, max=23537, avg=11653.58, stdev=2858.79 00:37:12.364 clat percentiles (usec): 00:37:12.364 | 1.00th=[ 4146], 5.00th=[ 7111], 10.00th=[ 8356], 20.00th=[ 9372], 00:37:12.364 | 30.00th=[10290], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:37:12.364 | 70.00th=[12649], 80.00th=[12780], 90.00th=[15139], 95.00th=[16909], 00:37:12.364 | 99.00th=[19530], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:37:12.364 | 99.99th=[23462] 00:37:12.364 bw ( KiB/s): min=20552, max=21552, per=32.72%, avg=21052.00, stdev=707.11, samples=2 00:37:12.364 iops : min= 5138, max= 5388, avg=5263.00, stdev=176.78, samples=2 00:37:12.364 lat (msec) : 2=0.10%, 4=0.41%, 10=20.28%, 20=76.33%, 50=2.89% 00:37:12.364 cpu : usr=5.04%, sys=9.69%, ctx=429, majf=0, minf=1 00:37:12.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:12.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.364 issued rwts: total=5120,5390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.364 00:37:12.364 Run status group 0 (all jobs): 00:37:12.364 READ: bw=59.6MiB/s (62.5MB/s), 12.5MiB/s-19.8MiB/s (13.1MB/s-20.7MB/s), io=60.4MiB (63.3MB), run=1002-1013msec 00:37:12.364 WRITE: bw=62.8MiB/s (65.9MB/s), 13.9MiB/s-20.8MiB/s (14.6MB/s-21.8MB/s), io=63.6MiB (66.7MB), run=1002-1013msec 00:37:12.364 00:37:12.364 Disk stats (read/write): 00:37:12.364 nvme0n1: ios=2514/2560, merge=0/0, ticks=11905/12730, in_queue=24635, util=83.77% 00:37:12.364 nvme0n2: ios=2896/3072, merge=0/0, ticks=19832/16795, in_queue=36627, util=83.57% 00:37:12.364 nvme0n3: ios=2801/3072, merge=0/0, ticks=35546/49451, in_queue=84997, util=98.49% 00:37:12.364 nvme0n4: ios=4138/4383, merge=0/0, ticks=51124/48392, in_queue=99516, util=99.67% 00:37:12.364 09:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:12.364 09:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1716580 00:37:12.364 09:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:12.364 09:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:12.364 [global] 00:37:12.364 thread=1 00:37:12.364 invalidate=1 00:37:12.364 rw=read 00:37:12.364 time_based=1 00:37:12.364 runtime=10 00:37:12.364 ioengine=libaio 00:37:12.364 direct=1 00:37:12.364 bs=4096 00:37:12.364 iodepth=1 00:37:12.364 norandommap=1 00:37:12.364 numjobs=1 00:37:12.364 00:37:12.364 [job0] 00:37:12.364 filename=/dev/nvme0n1 00:37:12.364 [job1] 00:37:12.364 
filename=/dev/nvme0n2 00:37:12.364 [job2] 00:37:12.364 filename=/dev/nvme0n3 00:37:12.364 [job3] 00:37:12.364 filename=/dev/nvme0n4 00:37:12.364 Could not set queue depth (nvme0n1) 00:37:12.364 Could not set queue depth (nvme0n2) 00:37:12.364 Could not set queue depth (nvme0n3) 00:37:12.364 Could not set queue depth (nvme0n4) 00:37:12.364 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.364 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.364 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.364 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.364 fio-3.35 00:37:12.364 Starting 4 threads 00:37:15.646 09:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:15.646 09:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:15.646 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8859648, buflen=4096 00:37:15.646 fio: pid=1716793, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:15.903 09:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:15.903 09:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:15.903 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9027584, buflen=4096 00:37:15.903 fio: pid=1716792, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:16.467 09:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.468 09:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:16.468 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=716800, buflen=4096 00:37:16.468 fio: pid=1716753, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:16.727 09:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.727 09:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:16.727 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=7229440, buflen=4096 00:37:16.727 fio: pid=1716770, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:37:16.727 00:37:16.727 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1716753: Mon Oct 7 09:58:11 2024 00:37:16.727 read: IOPS=46, BW=186KiB/s (190kB/s)(700KiB/3770msec) 00:37:16.727 slat (usec): min=5, max=16886, avg=190.09, stdev=1641.38 00:37:16.727 clat (usec): min=215, max=41951, avg=21200.80, 
stdev=20413.99 00:37:16.727 lat (usec): min=222, max=58012, avg=21391.87, stdev=20656.41 00:37:16.727 clat percentiles (usec): 00:37:16.727 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 235], 00:37:16.727 | 30.00th=[ 255], 40.00th=[ 277], 50.00th=[40633], 60.00th=[41157], 00:37:16.727 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:16.727 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.727 | 99.99th=[42206] 00:37:16.727 bw ( KiB/s): min= 96, max= 736, per=3.16%, avg=192.57, stdev=239.74, samples=7 00:37:16.727 iops : min= 24, max= 184, avg=48.14, stdev=59.93, samples=7 00:37:16.727 lat (usec) : 250=28.98%, 500=18.75%, 750=0.57% 00:37:16.727 lat (msec) : 50=51.14% 00:37:16.727 cpu : usr=0.11%, sys=0.00%, ctx=180, majf=0, minf=1 00:37:16.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 issued rwts: total=176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.727 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1716770: Mon Oct 7 09:58:11 2024 00:37:16.727 read: IOPS=424, BW=1698KiB/s (1739kB/s)(7060KiB/4157msec) 00:37:16.727 slat (usec): min=5, max=27863, avg=43.38, stdev=786.45 00:37:16.727 clat (usec): min=202, max=42044, avg=2308.80, stdev=8921.06 00:37:16.727 lat (usec): min=210, max=68949, avg=2347.73, stdev=9073.85 00:37:16.727 clat percentiles (usec): 00:37:16.727 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:37:16.727 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 265], 00:37:16.727 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[40633], 00:37:16.727 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:37:16.727 | 99.99th=[42206] 00:37:16.727 bw ( KiB/s): min= 88, max= 9224, per=29.00%, avg=1760.62, stdev=3202.97, samples=8 00:37:16.727 iops : min= 22, max= 2306, avg=440.12, stdev=800.76, samples=8 00:37:16.727 lat (usec) : 250=57.25%, 500=37.54%, 750=0.11% 00:37:16.727 lat (msec) : 50=5.04% 00:37:16.727 cpu : usr=0.31%, sys=0.60%, ctx=1769, majf=0, minf=1 00:37:16.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.727 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1716792: Mon Oct 7 09:58:11 2024 00:37:16.727 read: IOPS=656, BW=2625KiB/s (2688kB/s)(8816KiB/3358msec) 00:37:16.727 slat (usec): min=6, max=16676, avg=21.09, stdev=386.66 00:37:16.727 clat (usec): min=211, max=41483, avg=1486.78, stdev=6923.91 00:37:16.727 lat (usec): min=224, max=41499, avg=1507.87, stdev=6934.61 00:37:16.727 clat percentiles (usec): 00:37:16.727 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 243], 00:37:16.727 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:37:16.727 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 379], 00:37:16.727 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 
00:37:16.727 | 99.99th=[41681] 00:37:16.727 bw ( KiB/s): min= 96, max= 7824, per=23.58%, avg=1432.00, stdev=3131.85, samples=6 00:37:16.727 iops : min= 24, max= 1956, avg=358.00, stdev=782.96, samples=6 00:37:16.727 lat (usec) : 250=28.34%, 500=67.76%, 750=0.82% 00:37:16.727 lat (msec) : 2=0.05%, 50=2.99% 00:37:16.727 cpu : usr=0.45%, sys=0.92%, ctx=2207, majf=0, minf=1 00:37:16.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.727 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1716793: Mon Oct 7 09:58:11 2024 00:37:16.727 read: IOPS=725, BW=2900KiB/s (2970kB/s)(8652KiB/2983msec) 00:37:16.727 slat (nsec): min=7981, max=58844, avg=13639.86, stdev=6747.78 00:37:16.727 clat (usec): min=236, max=41529, avg=1348.23, stdev=6285.26 00:37:16.727 lat (usec): min=244, max=41562, avg=1361.87, stdev=6286.94 00:37:16.727 clat percentiles (usec): 00:37:16.727 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 277], 00:37:16.727 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 351], 00:37:16.727 | 70.00th=[ 383], 80.00th=[ 457], 90.00th=[ 515], 95.00th=[ 570], 00:37:16.727 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:37:16.727 | 99.99th=[41681] 00:37:16.727 bw ( KiB/s): min= 152, max=10480, per=38.54%, avg=2339.20, stdev=4554.21, samples=5 00:37:16.727 iops : min= 38, max= 2620, avg=584.80, stdev=1138.55, samples=5 00:37:16.727 lat (usec) : 250=4.16%, 500=84.38%, 750=8.92%, 1000=0.05% 00:37:16.727 lat (msec) : 50=2.45% 00:37:16.727 cpu : usr=0.54%, sys=1.61%, ctx=2164, majf=0, minf=1 00:37:16.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.727 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.727 00:37:16.727 Run status group 0 (all jobs): 00:37:16.727 READ: bw=6069KiB/s (6214kB/s), 186KiB/s-2900KiB/s (190kB/s-2970kB/s), io=24.6MiB (25.8MB), run=2983-4157msec 00:37:16.727 00:37:16.727 Disk stats (read/write): 00:37:16.727 nvme0n1: ios=211/0, merge=0/0, ticks=3771/0, in_queue=3771, util=98.83% 00:37:16.727 nvme0n2: ios=1763/0, merge=0/0, ticks=3989/0, in_queue=3989, util=95.84% 00:37:16.727 nvme0n3: ios=2205/0, merge=0/0, ticks=3271/0, in_queue=3271, util=95.98% 00:37:16.727 nvme0n4: ios=1826/0, merge=0/0, ticks=2799/0, in_queue=2799, util=96.74% 00:37:17.294 09:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:17.294 09:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:18.229 09:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:18.229 09:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:18.487 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:18.487 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:18.745 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:18.745 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1716580 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:19.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:19.312 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:19.312 nvmf hotplug test: fio failed as expected 00:37:19.312 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:19.878 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:19.878 rmmod nvme_tcp 00:37:19.878 rmmod nvme_fabrics 00:37:19.878 rmmod nvme_keyring 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1714543 ']' 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1714543 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1714543 ']' 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1714543 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714543 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714543' 00:37:20.136 killing process with pid 1714543 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1714543 00:37:20.136 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1714543 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.395 09:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.925 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.925 00:37:22.925 real 0m28.330s 00:37:22.925 user 1m20.163s 00:37:22.925 sys 0m11.867s 00:37:22.925 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.925 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:22.925 ************************************ 00:37:22.925 END TEST nvmf_fio_target 00:37:22.925 ************************************ 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.926 ************************************ 00:37:22.926 START TEST nvmf_bdevio 00:37:22.926 ************************************ 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:22.926 * Looking for test storage... 
00:37:22.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.926 --rc genhtml_branch_coverage=1 00:37:22.926 --rc genhtml_function_coverage=1 00:37:22.926 --rc genhtml_legend=1 00:37:22.926 --rc geninfo_all_blocks=1 00:37:22.926 --rc geninfo_unexecuted_blocks=1 00:37:22.926 00:37:22.926 ' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.926 --rc genhtml_branch_coverage=1 00:37:22.926 --rc genhtml_function_coverage=1 00:37:22.926 --rc genhtml_legend=1 00:37:22.926 --rc geninfo_all_blocks=1 00:37:22.926 --rc geninfo_unexecuted_blocks=1 00:37:22.926 00:37:22.926 ' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.926 --rc genhtml_branch_coverage=1 00:37:22.926 --rc genhtml_function_coverage=1 00:37:22.926 --rc genhtml_legend=1 00:37:22.926 --rc geninfo_all_blocks=1 00:37:22.926 --rc geninfo_unexecuted_blocks=1 00:37:22.926 00:37:22.926 ' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.926 --rc genhtml_branch_coverage=1 00:37:22.926 --rc genhtml_function_coverage=1 00:37:22.926 --rc genhtml_legend=1 00:37:22.926 --rc geninfo_all_blocks=1 00:37:22.926 --rc geninfo_unexecuted_blocks=1 00:37:22.926 00:37:22.926 ' 00:37:22.926 09:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.926 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.927 09:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.927 09:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:25.458 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:25.459 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:25.459 09:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:25.459 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:25.459 Found net devices under 0000:84:00.0: cvl_0_0 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:25.459 Found net devices under 0000:84:00.1: cvl_0_1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
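The loop traced above is nvmf/common.sh resolving each supported NIC port to its kernel net device: ports whose driver is unknown or unbound are skipped, the Mellanox 0x1017/0x1019 device IDs get their RDMA-specific handling, and the two Intel E810 ports (0x8086:0x159b) found here are mapped to cvl_0_0 and cvl_0_1 through sysfs. Stripped of the xtrace prefixes, it amounts to roughly the following sketch (pci_devs is filled in earlier by gather_supported_nvmf_pci_devs, and the real helper also filters on interface operstate, omitted here):

    for pci in "${pci_devs[@]}"; do                      # 0000:84:00.0 and 0000:84:00.1 above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # netdevs the kernel bound to this port
        pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 / cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done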
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:25.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:25.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:37:25.459 00:37:25.459 --- 10.0.0.2 ping statistics --- 00:37:25.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.459 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:25.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:25.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:37:25.459 00:37:25.459 --- 10.0.0.1 ping statistics --- 00:37:25.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.459 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:25.459 09:58:20 
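The nvmf_tcp_init block just traced moves one E810 port into a dedicated network namespace so that target and initiator traffic cannot short-circuit through the host stack, then verifies connectivity in both directions. Minus the xtrace noise, the plumbing is:

    ip netns add cvl_0_0_ns_spdk                                       # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host

The SPDK_NVMF comment on the iptables rule is what the cleanup at the end of the test keys on when it restores the firewall.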
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1719687 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1719687 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1719687 ']' 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:25.459 09:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:25.718 [2024-10-07 09:58:20.317976] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:25.718 [2024-10-07 09:58:20.320062] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:37:25.718 [2024-10-07 09:58:20.320163] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:25.718 [2024-10-07 09:58:20.419496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:25.977 [2024-10-07 09:58:20.535305] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:25.977 [2024-10-07 09:58:20.535374] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:25.977 [2024-10-07 09:58:20.535403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:25.977 [2024-10-07 09:58:20.535415] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:25.977 [2024-10-07 09:58:20.535426] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:25.977 [2024-10-07 09:58:20.537424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:37:25.977 [2024-10-07 09:58:20.537553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:37:25.977 [2024-10-07 09:58:20.537618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:37:25.977 [2024-10-07 09:58:20.537622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:25.977 [2024-10-07 09:58:20.648182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
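With networking in place, nvmfappstart launches the target inside that namespace in interrupt mode with a four-core mask, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of the traced sequence follows; the polling loop only illustrates what waitforlisten does and is not its literal body:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # wait for the RPC socket before issuing any rpc_cmd calls (run from the spdk checkout)
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done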
00:37:25.977 [2024-10-07 09:58:20.648422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:25.977 [2024-10-07 09:58:20.648720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:25.977 [2024-10-07 09:58:20.649419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:25.977 [2024-10-07 09:58:20.649648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:26.911 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:26.911 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:37:26.911 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:26.911 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:26.911 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 [2024-10-07 09:58:21.750394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 Malloc0 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.169 09:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.169 [2024-10-07 09:58:21.810566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:27.169 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:27.169 { 00:37:27.169 "params": { 00:37:27.169 "name": "Nvme$subsystem", 00:37:27.169 "trtype": "$TEST_TRANSPORT", 00:37:27.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.169 "adrfam": "ipv4", 00:37:27.169 "trsvcid": "$NVMF_PORT", 00:37:27.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.169 "hdgst": ${hdgst:-false}, 00:37:27.169 "ddgst": ${ddgst:-false} 00:37:27.169 }, 00:37:27.170 "method": "bdev_nvme_attach_controller" 00:37:27.170 } 00:37:27.170 EOF 00:37:27.170 )") 00:37:27.170 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:37:27.170 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:37:27.170 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:37:27.170 09:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:27.170 "params": { 00:37:27.170 "name": "Nvme1", 00:37:27.170 "trtype": "tcp", 00:37:27.170 "traddr": "10.0.0.2", 00:37:27.170 "adrfam": "ipv4", 00:37:27.170 "trsvcid": "4420", 00:37:27.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:27.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:27.170 "hdgst": false, 00:37:27.170 "ddgst": false 00:37:27.170 }, 00:37:27.170 "method": "bdev_nvme_attach_controller" 00:37:27.170 }' 00:37:27.170 [2024-10-07 09:58:21.894743] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
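The target is then configured entirely over that RPC socket, and bdevio is pointed at it through a JSON config generated on the fly (the /dev/fd/62 in the trace is the process substitution). rpc_cmd here is the autotest wrapper around scripts/rpc.py, and the JSON comment below is the attach stanza printed by gen_nvmf_target_json above; the full bdev-subsystem wrapper it emits is not shown verbatim in this log:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)    # appears as --json /dev/fd/62 above
    # attach stanza generated for the initiator side:
    #   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    #                 "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    #                 "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
    #     "method": "bdev_nvme_attach_controller" }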
00:37:27.170 [2024-10-07 09:58:21.894950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719845 ] 00:37:27.170 [2024-10-07 09:58:21.972219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:27.428 [2024-10-07 09:58:22.090084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.428 [2024-10-07 09:58:22.090136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:27.428 [2024-10-07 09:58:22.090140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.686 I/O targets: 00:37:27.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:27.686 00:37:27.686 00:37:27.686 CUnit - A unit testing framework for C - Version 2.1-3 00:37:27.686 http://cunit.sourceforge.net/ 00:37:27.686 00:37:27.686 00:37:27.686 Suite: bdevio tests on: Nvme1n1 00:37:27.686 Test: blockdev write read block ...passed 00:37:27.686 Test: blockdev write zeroes read block ...passed 00:37:27.686 Test: blockdev write zeroes read no split ...passed 00:37:27.686 Test: blockdev write zeroes read split ...passed 00:37:27.686 Test: blockdev write zeroes read split partial ...passed 00:37:27.686 Test: blockdev reset ...[2024-10-07 09:58:22.463834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:27.686 [2024-10-07 09:58:22.463948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171a0c0 (9): Bad file descriptor 00:37:27.944 [2024-10-07 09:58:22.556197] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:27.944 passed 00:37:27.944 Test: blockdev write read 8 blocks ...passed 00:37:27.944 Test: blockdev write read size > 128k ...passed 00:37:27.944 Test: blockdev write read invalid size ...passed 00:37:27.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:27.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:27.944 Test: blockdev write read max offset ...passed 00:37:27.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:27.944 Test: blockdev writev readv 8 blocks ...passed 00:37:27.944 Test: blockdev writev readv 30 x 1block ...passed 00:37:27.944 Test: blockdev writev readv block ...passed 00:37:28.202 Test: blockdev writev readv size > 128k ...passed 00:37:28.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:28.202 Test: blockdev comparev and writev ...[2024-10-07 09:58:22.769737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.769771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.769796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.769813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.770257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.770283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.770305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.770321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.770739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.770785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.770801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.771236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.771261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.771283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:28.202 [2024-10-07 09:58:22.771299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:28.202 passed 00:37:28.202 Test: blockdev nvme passthru rw ...passed 00:37:28.202 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:58:22.853273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:28.202 [2024-10-07 09:58:22.853300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.853474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:28.202 [2024-10-07 09:58:22.853497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.853652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:28.202 [2024-10-07 09:58:22.853676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:28.202 [2024-10-07 09:58:22.853833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:28.202 [2024-10-07 09:58:22.853856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:28.202 passed 00:37:28.202 Test: blockdev nvme admin passthru ...passed 00:37:28.202 Test: blockdev copy ...passed 00:37:28.202 00:37:28.202 Run Summary: Type Total Ran Passed Failed Inactive 00:37:28.202 suites 1 1 n/a 0 0 00:37:28.202 tests 23 23 23 0 0 00:37:28.202 asserts 152 152 152 0 n/a 00:37:28.202 00:37:28.202 Elapsed time = 1.267 seconds 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.461 rmmod nvme_tcp 00:37:28.461 rmmod nvme_fabrics 00:37:28.461 rmmod nvme_keyring 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
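With all 23 bdevio tests passed, the EXIT trap unwinds the fixture. The cleanup traced in this and the next few lines reduces to the sequence below; the exact body of _remove_spdk_ns is not visible in this log, so the netns deletion shown is an assumption about what it amounts to:

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp                  # drags out nvme_tcp/nvme_fabrics/nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill 1719687                             # the nvmf_tgt started earlier; ps reports it as reactor_3
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
    ip netns delete cvl_0_0_ns_spdk          # assumption: what _remove_spdk_ns does for this namespace
    ip -4 addr flush cvl_0_1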
00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1719687 ']' 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1719687 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1719687 ']' 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1719687 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:28.461 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1719687 00:37:28.719 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:37:28.719 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:37:28.719 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1719687' 00:37:28.719 killing process with pid 1719687 00:37:28.719 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1719687 00:37:28.719 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1719687 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.978 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.876 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.876 00:37:30.876 real 0m8.471s 00:37:30.876 user 
0m10.184s 00:37:30.876 sys 0m3.275s 00:37:30.876 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:30.876 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.876 ************************************ 00:37:30.876 END TEST nvmf_bdevio 00:37:30.876 ************************************ 00:37:30.876 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:30.876 00:37:30.876 real 4m34.275s 00:37:30.876 user 9m51.399s 00:37:30.876 sys 1m42.378s 00:37:30.877 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:30.877 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:30.877 ************************************ 00:37:30.877 END TEST nvmf_target_core_interrupt_mode 00:37:30.877 ************************************ 00:37:31.136 09:58:25 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:31.136 09:58:25 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:31.136 09:58:25 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.136 09:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.136 ************************************ 00:37:31.136 START TEST nvmf_interrupt 00:37:31.136 ************************************ 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:31.136 * Looking for test storage... 
00:37:31.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:31.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.136 --rc genhtml_branch_coverage=1 00:37:31.136 --rc genhtml_function_coverage=1 00:37:31.136 --rc genhtml_legend=1 00:37:31.136 --rc geninfo_all_blocks=1 00:37:31.136 --rc geninfo_unexecuted_blocks=1 00:37:31.136 00:37:31.136 ' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:31.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.136 --rc genhtml_branch_coverage=1 00:37:31.136 --rc genhtml_function_coverage=1 00:37:31.136 --rc genhtml_legend=1 00:37:31.136 --rc geninfo_all_blocks=1 00:37:31.136 --rc geninfo_unexecuted_blocks=1 00:37:31.136 00:37:31.136 ' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:31.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.136 --rc genhtml_branch_coverage=1 00:37:31.136 --rc genhtml_function_coverage=1 00:37:31.136 --rc genhtml_legend=1 00:37:31.136 --rc geninfo_all_blocks=1 00:37:31.136 --rc geninfo_unexecuted_blocks=1 00:37:31.136 00:37:31.136 ' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:31.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.136 --rc genhtml_branch_coverage=1 00:37:31.136 --rc genhtml_function_coverage=1 00:37:31.136 --rc genhtml_legend=1 00:37:31.136 --rc geninfo_all_blocks=1 00:37:31.136 --rc geninfo_unexecuted_blocks=1 00:37:31.136 00:37:31.136 ' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.136 09:58:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.137 09:58:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- 
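Because the interrupt-mode check at nvmf/common.sh@33 evaluates true for this run, build_nvmf_app_args (traced just above) appends --interrupt-mode to the target command line that the interrupt test will launch. Reduced to its effect here, together with the namespace wrapper that nvmf_tcp_init prepends later:

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id 0 and the full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                    # empty for this run
    NVMF_APP+=(--interrupt-mode)                   # the '[' 1 -eq 1 ']' branch above
    # once the target namespace exists:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # ip netns exec cvl_0_0_ns_spdk ...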
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:34.421 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.421 09:58:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:34.421 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:34.421 Found net devices under 0000:84:00.0: cvl_0_0 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:34.421 Found net devices under 0000:84:00.1: cvl_0_1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:34.421 09:58:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:34.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:34.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:37:34.421 00:37:34.421 --- 10.0.0.2 ping statistics --- 00:37:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.421 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:34.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:34.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:37:34.421 00:37:34.421 --- 10.0.0.1 ping statistics --- 00:37:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.421 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:34.421 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1722072 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1722072 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1722072 ']' 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.422 09:58:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-10-07 09:58:28.848109] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:34.422 [2024-10-07 09:58:28.849150] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:37:34.422 [2024-10-07 09:58:28.849215] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:34.422 [2024-10-07 09:58:28.911331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:34.422 [2024-10-07 09:58:29.017133] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:34.422 [2024-10-07 09:58:29.017200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.422 [2024-10-07 09:58:29.017213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.422 [2024-10-07 09:58:29.017224] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.422 [2024-10-07 09:58:29.017234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.422 [2024-10-07 09:58:29.017991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.422 [2024-10-07 09:58:29.017997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.422 [2024-10-07 09:58:29.103834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:34.422 [2024-10-07 09:58:29.103863] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:34.422 [2024-10-07 09:58:29.104125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:34.422 5000+0 records in 00:37:34.422 5000+0 records out 00:37:34.422 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0240571 s, 426 MB/s 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.422 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.682 AIO0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.682 [2024-10-07 09:58:29.266674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.682 09:58:29 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:34.682 [2024-10-07 09:58:29.298998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1722072 0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 0 idle 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722072 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.30 reactor_0' 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722072 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.30 reactor_0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1722072 1 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 1 idle 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:34.682 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722076 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722076 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1722238 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
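The reactor_is_idle checks traced above, and the reactor_is_busy checks that follow the spdk_nvme_perf launch, take one batch sample of top for the nvmf_tgt pid, pick out the reactor_N thread line, and compare its %CPU column (field 9 in this top layout) against a threshold; the real helper retries up to ten samples. A condensed single-sample sketch of that probe, assuming the same column layout top prints in this run:

    # Condensed sketch of the reactor probe traced above; thresholds follow the defaults seen here
    # (the test lowers the busy threshold to 30 while spdk_nvme_perf is running).
    check_reactor() {                     # usage: check_reactor <nvmf_tgt pid> <reactor index> <busy|idle>
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30
        local line cpu
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")   # one batch sample, thread view
        cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column of that thread
        cpu=${cpu%.*}                                                 # keep the integer part, e.g. 93.3 -> 93
        if [[ $state == busy ]]; then
            (( cpu >= busy_threshold ))   # a busy reactor should be burning CPU
        else
            (( cpu <= idle_threshold ))   # an idle interrupt-mode reactor should sit near 0%
        fi
    }
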
00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1722072 0 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1722072 0 busy 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:34.941 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722072 root 20 0 128.2g 48768 35328 R 60.0 0.1 0:00.40 reactor_0' 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722072 root 20 0 128.2g 48768 35328 R 60.0 0.1 0:00.40 reactor_0 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1722072 1 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1722072 1 busy 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722076 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.21 reactor_1' 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722076 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.21 reactor_1 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:35.199 09:58:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:35.199 09:58:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1722238 00:37:45.171 Initializing NVMe Controllers 00:37:45.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:45.171 Controller IO queue size 256, less than required. 00:37:45.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:45.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:45.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:45.171 Initialization complete. Launching workers. 
00:37:45.171 ======================================================== 00:37:45.171 Latency(us) 00:37:45.171 Device Information : IOPS MiB/s Average min max 00:37:45.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14701.60 57.43 17422.71 4441.89 21412.93 00:37:45.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14073.30 54.97 18201.90 4519.15 25383.00 00:37:45.171 ======================================================== 00:37:45.171 Total : 28774.90 112.40 17803.80 4441.89 25383.00 00:37:45.171 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1722072 0 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 0 idle 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:45.171 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:45.172 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:45.172 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:45.172 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:45.429 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722072 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.26 reactor_0' 00:37:45.429 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722072 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.26 reactor_0 00:37:45.429 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:45.429 09:58:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1722072 1 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 1 idle 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722076 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.98 reactor_1' 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722076 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.98 reactor_1 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:45.429 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:45.430 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:45.430 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:45.430 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:45.430 09:58:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:45.430 09:58:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:45.688 09:58:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:45.688 09:58:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:37:45.688 09:58:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:37:45.688 09:58:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:37:45.688 09:58:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1722072 0 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 0 idle 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:47.593 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722072 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.37 reactor_0' 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722072 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.37 reactor_0 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1722072 1 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1722072 1 idle 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1722072 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1722072 -w 256 00:37:47.852 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1722076 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.02 reactor_1' 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1722076 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.02 reactor_1 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:48.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:48.111 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.112 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.112 rmmod nvme_tcp 00:37:48.112 rmmod nvme_fabrics 00:37:48.372 rmmod nvme_keyring 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1722072 ']' 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1722072 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1722072 ']' 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1722072 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.372 09:58:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1722072 00:37:48.372 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:48.372 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:48.372 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1722072' 00:37:48.372 killing process with pid 1722072 00:37:48.372 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1722072 00:37:48.372 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1722072 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:48.641 09:58:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.198 09:58:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.198 00:37:51.198 real 0m19.648s 00:37:51.198 user 0m37.208s 00:37:51.198 sys 0m7.679s 00:37:51.198 09:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.198 09:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:51.198 ************************************ 00:37:51.198 END TEST nvmf_interrupt 00:37:51.198 ************************************ 00:37:51.198 00:37:51.198 real 28m55.833s 00:37:51.198 user 67m44.989s 00:37:51.199 sys 7m51.964s 00:37:51.199 09:58:45 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.199 09:58:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.199 ************************************ 00:37:51.199 END TEST nvmf_tcp 00:37:51.199 ************************************ 00:37:51.199 09:58:45 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:37:51.199 09:58:45 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:51.199 09:58:45 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:51.199 09:58:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.199 09:58:45 -- common/autotest_common.sh@10 -- # set +x 00:37:51.199 ************************************ 00:37:51.199 START TEST spdkcli_nvmf_tcp 00:37:51.199 ************************************ 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:51.199 * Looking for test storage... 00:37:51.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.199 --rc genhtml_branch_coverage=1 00:37:51.199 --rc genhtml_function_coverage=1 00:37:51.199 --rc genhtml_legend=1 00:37:51.199 --rc geninfo_all_blocks=1 00:37:51.199 --rc geninfo_unexecuted_blocks=1 00:37:51.199 00:37:51.199 ' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.199 --rc genhtml_branch_coverage=1 00:37:51.199 --rc genhtml_function_coverage=1 00:37:51.199 --rc genhtml_legend=1 00:37:51.199 --rc geninfo_all_blocks=1 00:37:51.199 --rc geninfo_unexecuted_blocks=1 00:37:51.199 00:37:51.199 ' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.199 --rc genhtml_branch_coverage=1 00:37:51.199 --rc genhtml_function_coverage=1 00:37:51.199 --rc genhtml_legend=1 00:37:51.199 --rc geninfo_all_blocks=1 00:37:51.199 --rc geninfo_unexecuted_blocks=1 00:37:51.199 00:37:51.199 ' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.199 --rc genhtml_branch_coverage=1 00:37:51.199 --rc genhtml_function_coverage=1 00:37:51.199 --rc genhtml_legend=1 00:37:51.199 --rc geninfo_all_blocks=1 00:37:51.199 --rc geninfo_unexecuted_blocks=1 00:37:51.199 00:37:51.199 ' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:51.199 
09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:51.199 09:58:45 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:51.199 09:58:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1724140 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1724140 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1724140 ']' 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:51.200 09:58:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.200 [2024-10-07 09:58:45.769186] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:37:51.200 [2024-10-07 09:58:45.769342] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724140 ] 00:37:51.200 [2024-10-07 09:58:45.859511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:51.200 [2024-10-07 09:58:45.980007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.200 [2024-10-07 09:58:45.980014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.458 09:58:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:51.458 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:51.458 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:51.458 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:51.458 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:51.458 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:51.458 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:51.458 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:51.458 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:51.458 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:51.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:51.459 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:51.459 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:51.459 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:51.459 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:51.459 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:51.459 ' 00:37:54.741 [2024-10-07 09:58:49.060079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.675 [2024-10-07 09:58:50.368724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:58.202 [2024-10-07 09:58:52.715929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:00.101 [2024-10-07 09:58:54.734331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:01.501 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:01.501 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:01.501 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:01.501 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:01.501 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:01.501 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:01.501 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:01.501 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:01.758 09:58:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.322 
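The check_match step traced above lists the live configuration with spdkcli ('ll /nvmf') and verifies it against a stored template using the match tool; the redirection of that listing into the .test file is not visible in the xtrace output, so that part of the sketch below is inferred rather than read from this log. Roughly:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test   # capture path inferred, not traced
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match      # compares the .match template against the capture
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test                           # the template stays; the capture is removed
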
09:58:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:02.322 09:58:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.579 09:58:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:02.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:02.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:02.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:02.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:02.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:02.579 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:02.579 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:02.579 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:02.579 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:02.579 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:02.580 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:02.580 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:02.580 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:02.580 ' 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:09.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:09.138 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:09.138 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:09.138 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:09.138 
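After the clear-config pass above has torn the nvmf tree back down, the target started for this test is stopped via the killprocess helper. A rough reconstruction of that helper's flow, pieced together from the xtrace lines that follow (the real function in autotest_common.sh has additional branches, e.g. for sudo-wrapped and non-Linux processes, which are not shown here):

  killprocess() {
      local pid=$1
      # second invocation below hits this branch: the pid is already gone
      kill -0 "$pid" || { echo "Process with pid $pid is not found"; return 0; }
      # reactor_0 is the SPDK app itself, not a sudo wrapper, so kill it directly
      if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"   # reap the target so the next test starts from a clean state
  }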
09:59:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1724140 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1724140 ']' 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1724140 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724140 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724140' 00:38:09.138 killing process with pid 1724140 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1724140 00:38:09.138 09:59:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1724140 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1724140 ']' 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1724140 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1724140 ']' 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1724140 00:38:09.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1724140) - No such process 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1724140 is not found' 00:38:09.138 Process with pid 1724140 is not found 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:09.138 00:38:09.138 real 0m17.788s 00:38:09.138 user 0m38.270s 00:38:09.138 sys 0m0.985s 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:09.138 09:59:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:09.138 ************************************ 00:38:09.138 END TEST spdkcli_nvmf_tcp 00:38:09.138 ************************************ 00:38:09.138 09:59:03 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:09.138 09:59:03 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:09.138 09:59:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:09.138 09:59:03 -- common/autotest_common.sh@10 -- # set +x 00:38:09.138 ************************************ 00:38:09.138 START TEST nvmf_identify_passthru 00:38:09.138 ************************************ 00:38:09.138 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:09.138 * Looking for test 
storage... 00:38:09.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:09.138 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:09.138 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:38:09.138 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:09.138 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:09.138 09:59:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.139 --rc genhtml_branch_coverage=1 00:38:09.139 --rc genhtml_function_coverage=1 00:38:09.139 --rc genhtml_legend=1 00:38:09.139 --rc geninfo_all_blocks=1 00:38:09.139 --rc geninfo_unexecuted_blocks=1 00:38:09.139 00:38:09.139 ' 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.139 --rc genhtml_branch_coverage=1 00:38:09.139 --rc genhtml_function_coverage=1 00:38:09.139 --rc genhtml_legend=1 00:38:09.139 --rc geninfo_all_blocks=1 00:38:09.139 --rc geninfo_unexecuted_blocks=1 00:38:09.139 00:38:09.139 ' 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.139 --rc genhtml_branch_coverage=1 00:38:09.139 --rc genhtml_function_coverage=1 00:38:09.139 --rc genhtml_legend=1 00:38:09.139 --rc geninfo_all_blocks=1 00:38:09.139 --rc geninfo_unexecuted_blocks=1 00:38:09.139 00:38:09.139 ' 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.139 --rc genhtml_branch_coverage=1 00:38:09.139 --rc genhtml_function_coverage=1 00:38:09.139 --rc genhtml_legend=1 00:38:09.139 --rc geninfo_all_blocks=1 00:38:09.139 --rc geninfo_unexecuted_blocks=1 00:38:09.139 00:38:09.139 ' 00:38:09.139 09:59:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:09.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.139 09:59:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:09.139 09:59:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.139 09:59:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:09.139 09:59:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:09.139 09:59:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:11.672 09:59:06 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:11.672 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:11.672 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:11.672 Found net devices under 0000:84:00.0: cvl_0_0 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:11.672 Found net devices under 0000:84:00.1: cvl_0_1 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:11.672 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:11.673 09:59:06 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:11.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:11.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:38:11.673 00:38:11.673 --- 10.0.0.2 ping statistics --- 00:38:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.673 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:11.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:11.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:38:11.673 00:38:11.673 --- 10.0.0.1 ping statistics --- 00:38:11.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.673 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:11.673 09:59:06 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:38:11.673 09:59:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:82:00.0 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:11.673 09:59:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:15.856 09:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:38:15.856 09:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:38:15.856 09:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:15.856 09:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:20.035 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:20.035 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:20.035 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:20.035 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.294 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.294 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1728891 00:38:20.294 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:20.294 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:20.294 09:59:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1728891 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1728891 ']' 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:20.294 09:59:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.294 [2024-10-07 09:59:14.939358] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:38:20.294 [2024-10-07 09:59:14.939501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.294 [2024-10-07 09:59:15.034219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:20.553 [2024-10-07 09:59:15.158719] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:20.553 [2024-10-07 09:59:15.158788] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:20.553 [2024-10-07 09:59:15.158804] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.553 [2024-10-07 09:59:15.158817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.553 [2024-10-07 09:59:15.158829] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.553 [2024-10-07 09:59:15.160767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.553 [2024-10-07 09:59:15.160850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:20.553 [2024-10-07 09:59:15.160977] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:38:20.553 [2024-10-07 09:59:15.160982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:38:20.553 09:59:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.553 INFO: Log level set to 20 00:38:20.553 INFO: Requests: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "method": "nvmf_set_config", 00:38:20.553 "id": 1, 00:38:20.553 "params": { 00:38:20.553 "admin_cmd_passthru": { 00:38:20.553 "identify_ctrlr": true 00:38:20.553 } 00:38:20.553 } 00:38:20.553 } 00:38:20.553 00:38:20.553 INFO: response: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "id": 1, 00:38:20.553 "result": true 00:38:20.553 } 00:38:20.553 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.553 09:59:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.553 INFO: Setting log level to 20 00:38:20.553 INFO: Setting log level to 20 00:38:20.553 INFO: Log level set to 20 00:38:20.553 INFO: Log level set to 20 00:38:20.553 INFO: Requests: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "method": "framework_start_init", 00:38:20.553 "id": 1 00:38:20.553 } 00:38:20.553 00:38:20.553 INFO: Requests: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "method": "framework_start_init", 00:38:20.553 "id": 1 00:38:20.553 } 00:38:20.553 00:38:20.553 [2024-10-07 09:59:15.348626] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:20.553 INFO: response: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "id": 1, 00:38:20.553 "result": true 00:38:20.553 } 00:38:20.553 00:38:20.553 INFO: response: 00:38:20.553 { 00:38:20.553 "jsonrpc": "2.0", 00:38:20.553 "id": 1, 00:38:20.553 "result": true 00:38:20.553 } 00:38:20.553 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.553 09:59:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.553 09:59:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:20.553 INFO: Setting log level to 40 00:38:20.553 INFO: Setting log level to 40 00:38:20.553 INFO: Setting log level to 40 00:38:20.553 [2024-10-07 09:59:15.358677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.553 09:59:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:20.553 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:20.812 09:59:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:38:20.812 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.812 09:59:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 Nvme0n1 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 [2024-10-07 09:59:18.261007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 [ 00:38:24.095 { 00:38:24.095 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:24.095 "subtype": "Discovery", 00:38:24.095 "listen_addresses": [], 00:38:24.095 "allow_any_host": true, 00:38:24.095 "hosts": [] 00:38:24.095 }, 00:38:24.095 { 00:38:24.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.095 "subtype": "NVMe", 00:38:24.095 "listen_addresses": [ 00:38:24.095 { 00:38:24.095 "trtype": "TCP", 00:38:24.095 "adrfam": "IPv4", 00:38:24.095 "traddr": "10.0.0.2", 00:38:24.095 "trsvcid": "4420" 00:38:24.095 } 00:38:24.095 ], 00:38:24.095 "allow_any_host": true, 00:38:24.095 "hosts": [], 00:38:24.095 "serial_number": 
"SPDK00000000000001", 00:38:24.095 "model_number": "SPDK bdev Controller", 00:38:24.095 "max_namespaces": 1, 00:38:24.095 "min_cntlid": 1, 00:38:24.095 "max_cntlid": 65519, 00:38:24.095 "namespaces": [ 00:38:24.095 { 00:38:24.095 "nsid": 1, 00:38:24.095 "bdev_name": "Nvme0n1", 00:38:24.095 "name": "Nvme0n1", 00:38:24.095 "nguid": "5DE0A0DF18BA4F849501A596BADC4DA3", 00:38:24.095 "uuid": "5de0a0df-18ba-4f84-9501-a596badc4da3" 00:38:24.095 } 00:38:24.095 ] 00:38:24.095 } 00:38:24.095 ] 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:24.095 09:59:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.095 rmmod nvme_tcp 00:38:24.095 rmmod nvme_fabrics 00:38:24.095 rmmod nvme_keyring 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1728891 ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1728891 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1728891 ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1728891 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1728891 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1728891' 00:38:24.095 killing process with pid 1728891 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1728891 00:38:24.095 09:59:18 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1728891 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:26.023 09:59:20 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.023 09:59:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:26.023 09:59:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.924 09:59:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.924 00:38:27.924 real 0m19.204s 00:38:27.924 user 0m27.230s 00:38:27.924 sys 0m3.858s 00:38:27.924 09:59:22 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.924 09:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:27.924 ************************************ 00:38:27.924 END TEST nvmf_identify_passthru 00:38:27.924 ************************************ 00:38:27.924 09:59:22 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:27.924 09:59:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:27.924 09:59:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.924 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:38:27.924 ************************************ 00:38:27.924 START TEST nvmf_dif 00:38:27.924 ************************************ 00:38:27.924 09:59:22 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:27.924 * Looking for test 
storage... 00:38:27.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:27.924 09:59:22 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:27.924 09:59:22 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:38:27.924 09:59:22 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:27.924 09:59:22 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.924 09:59:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.925 --rc genhtml_branch_coverage=1 00:38:27.925 --rc genhtml_function_coverage=1 00:38:27.925 --rc genhtml_legend=1 00:38:27.925 --rc geninfo_all_blocks=1 00:38:27.925 --rc geninfo_unexecuted_blocks=1 00:38:27.925 00:38:27.925 ' 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.925 --rc genhtml_branch_coverage=1 00:38:27.925 --rc genhtml_function_coverage=1 00:38:27.925 --rc genhtml_legend=1 00:38:27.925 --rc geninfo_all_blocks=1 00:38:27.925 --rc geninfo_unexecuted_blocks=1 00:38:27.925 00:38:27.925 ' 00:38:27.925 09:59:22 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.925 --rc genhtml_branch_coverage=1 00:38:27.925 --rc genhtml_function_coverage=1 00:38:27.925 --rc genhtml_legend=1 00:38:27.925 --rc geninfo_all_blocks=1 00:38:27.925 --rc geninfo_unexecuted_blocks=1 00:38:27.925 00:38:27.925 ' 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.925 --rc genhtml_branch_coverage=1 00:38:27.925 --rc genhtml_function_coverage=1 00:38:27.925 --rc genhtml_legend=1 00:38:27.925 --rc geninfo_all_blocks=1 00:38:27.925 --rc geninfo_unexecuted_blocks=1 00:38:27.925 00:38:27.925 ' 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.925 09:59:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.925 09:59:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.925 09:59:22 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.925 09:59:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.925 09:59:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:27.925 09:59:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:27.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:27.925 09:59:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:27.925 09:59:22 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:27.925 09:59:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.184 09:59:22 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:28.184 09:59:22 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:28.184 09:59:22 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:38:28.184 09:59:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:30.716 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.716 
09:59:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:30.716 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:30.716 Found net devices under 0000:84:00.0: cvl_0_0 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:30.716 Found net devices under 0000:84:00.1: cvl_0_1 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:30.716 09:59:25 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:30.717 09:59:25 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:30.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:30.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:38:30.975 00:38:30.975 --- 10.0.0.2 ping statistics --- 00:38:30.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.975 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:30.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:30.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:38:30.975 00:38:30.975 --- 10.0.0.1 ping statistics --- 00:38:30.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.975 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:38:30.975 09:59:25 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:32.350 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:32.350 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:32.350 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:32.350 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:32.350 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:32.350 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:32.350 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:32.350 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:32.350 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:32.350 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:32.350 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:32.350 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:32.350 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:32.350 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:32.350 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:32.350 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:32.350 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:32.350 09:59:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:32.350 09:59:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1732187 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:32.350 09:59:27 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1732187 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1732187 ']' 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:38:32.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:32.350 09:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:32.609 [2024-10-07 09:59:27.196129] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:38:32.609 [2024-10-07 09:59:27.196233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:32.609 [2024-10-07 09:59:27.272902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.609 [2024-10-07 09:59:27.397167] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:32.609 [2024-10-07 09:59:27.397233] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:32.609 [2024-10-07 09:59:27.397249] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:32.609 [2024-10-07 09:59:27.397263] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:32.609 [2024-10-07 09:59:27.397274] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:32.609 [2024-10-07 09:59:27.397979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.867 09:59:27 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:32.867 09:59:27 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:38:32.867 09:59:27 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:32.867 09:59:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:32.867 09:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 09:59:27 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:33.126 09:59:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:33.126 09:59:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 [2024-10-07 09:59:27.702451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.126 09:59:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 ************************************ 00:38:33.126 START TEST fio_dif_1_default 00:38:33.126 ************************************ 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 bdev_null0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:33.126 [2024-10-07 09:59:27.758750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:33.126 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:33.126 { 00:38:33.126 "params": { 00:38:33.126 "name": "Nvme$subsystem", 00:38:33.126 "trtype": "$TEST_TRANSPORT", 00:38:33.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.126 "adrfam": "ipv4", 00:38:33.126 "trsvcid": "$NVMF_PORT", 00:38:33.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.126 "hdgst": ${hdgst:-false}, 00:38:33.126 "ddgst": ${ddgst:-false} 00:38:33.126 }, 00:38:33.126 "method": "bdev_nvme_attach_controller" 00:38:33.126 } 00:38:33.126 EOF 00:38:33.126 )") 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
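For reference, the target-side setup that fio_dif_1_default drives through rpc_cmd in the trace above reduces to a short RPC sequence. A minimal sketch, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; the script layout and variable names here are illustrative, while the RPC names and arguments are copied from the trace:

  #!/usr/bin/env bash
  # Condensed restatement of the rpc_cmd calls traced above (not part of the run itself).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"   # assumes the nvmf_tgt started earlier is on the default RPC socket

  # TCP transport with DIF insert/strip, as passed via NVMF_TRANSPORT_OPTS
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

  # 64 MB null bdev: 512-byte blocks, 16-byte metadata, DIF type 1
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

  # Subsystem 0: add the null bdev as a namespace and listen on 10.0.0.2:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just below is the matching host-side piece: it hands fio's spdk_bdev ioengine a bdev_nvme_attach_controller call pointing at that same 10.0.0.2:4420 listener.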
00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:33.127 "params": { 00:38:33.127 "name": "Nvme0", 00:38:33.127 "trtype": "tcp", 00:38:33.127 "traddr": "10.0.0.2", 00:38:33.127 "adrfam": "ipv4", 00:38:33.127 "trsvcid": "4420", 00:38:33.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.127 "hdgst": false, 00:38:33.127 "ddgst": false 00:38:33.127 }, 00:38:33.127 "method": "bdev_nvme_attach_controller" 00:38:33.127 }' 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:33.127 09:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:33.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:33.385 fio-3.35 00:38:33.385 Starting 1 thread 00:38:45.588 00:38:45.588 filename0: (groupid=0, jobs=1): err= 0: pid=1732426: Mon Oct 7 09:59:38 2024 00:38:45.588 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10027msec) 00:38:45.588 slat (nsec): min=5419, max=35921, avg=12723.40, stdev=3584.32 00:38:45.588 clat (usec): min=637, max=45617, avg=41221.93, stdev=2668.11 00:38:45.588 lat (usec): min=648, max=45639, avg=41234.66, stdev=2668.12 00:38:45.588 clat percentiles (usec): 00:38:45.588 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:45.588 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:38:45.588 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:45.588 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:38:45.588 | 99.99th=[45876] 00:38:45.588 bw ( KiB/s): min= 384, max= 416, per=99.81%, avg=387.20, stdev= 9.85, samples=20 00:38:45.588 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:38:45.588 lat (usec) : 750=0.41% 00:38:45.588 lat (msec) : 50=99.59% 00:38:45.588 cpu : usr=90.73%, sys=8.92%, ctx=13, majf=0, minf=9 00:38:45.588 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.588 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.588 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:45.588 00:38:45.588 Run 
status group 0 (all jobs): 00:38:45.588 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10027-10027msec 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 00:38:45.588 real 0m11.395s 00:38:45.588 user 0m10.455s 00:38:45.588 sys 0m1.239s 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 ************************************ 00:38:45.588 END TEST fio_dif_1_default 00:38:45.588 ************************************ 00:38:45.588 09:59:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:45.588 09:59:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:45.588 09:59:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 ************************************ 00:38:45.588 START TEST fio_dif_1_multi_subsystems 00:38:45.588 ************************************ 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 bdev_null0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 [2024-10-07 09:59:39.218326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 bdev_null1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:45.588 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:45.589 { 00:38:45.589 "params": { 00:38:45.589 "name": "Nvme$subsystem", 00:38:45.589 "trtype": "$TEST_TRANSPORT", 00:38:45.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.589 "adrfam": "ipv4", 00:38:45.589 "trsvcid": "$NVMF_PORT", 00:38:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.589 "hdgst": ${hdgst:-false}, 00:38:45.589 "ddgst": ${ddgst:-false} 00:38:45.589 }, 00:38:45.589 "method": "bdev_nvme_attach_controller" 00:38:45.589 } 00:38:45.589 EOF 00:38:45.589 )") 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.589 09:59:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:45.589 { 00:38:45.589 "params": { 00:38:45.589 "name": "Nvme$subsystem", 00:38:45.589 "trtype": "$TEST_TRANSPORT", 00:38:45.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.589 "adrfam": "ipv4", 00:38:45.589 "trsvcid": "$NVMF_PORT", 00:38:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.589 "hdgst": ${hdgst:-false}, 00:38:45.589 "ddgst": ${ddgst:-false} 00:38:45.589 }, 00:38:45.589 "method": "bdev_nvme_attach_controller" 00:38:45.589 } 00:38:45.589 EOF 00:38:45.589 )") 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:45.589 "params": { 00:38:45.589 "name": "Nvme0", 00:38:45.589 "trtype": "tcp", 00:38:45.589 "traddr": "10.0.0.2", 00:38:45.589 "adrfam": "ipv4", 00:38:45.589 "trsvcid": "4420", 00:38:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.589 "hdgst": false, 00:38:45.589 "ddgst": false 00:38:45.589 }, 00:38:45.589 "method": "bdev_nvme_attach_controller" 00:38:45.589 },{ 00:38:45.589 "params": { 00:38:45.589 "name": "Nvme1", 00:38:45.589 "trtype": "tcp", 00:38:45.589 "traddr": "10.0.0.2", 00:38:45.589 "adrfam": "ipv4", 00:38:45.589 "trsvcid": "4420", 00:38:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:45.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:45.589 "hdgst": false, 00:38:45.589 "ddgst": false 00:38:45.589 }, 00:38:45.589 "method": "bdev_nvme_attach_controller" 00:38:45.589 }' 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:45.589 09:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.589 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:45.589 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:45.589 fio-3.35 00:38:45.589 Starting 2 threads 00:38:57.789 00:38:57.789 filename0: (groupid=0, jobs=1): err= 0: pid=1733941: Mon Oct 7 09:59:50 2024 00:38:57.789 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10011msec) 00:38:57.789 slat (nsec): min=5987, max=35246, avg=10726.35, stdev=2865.56 00:38:57.789 clat (usec): min=786, max=44638, avg=40660.57, stdev=3617.80 00:38:57.789 lat (usec): min=795, max=44654, avg=40671.29, stdev=3617.81 00:38:57.789 clat percentiles (usec): 00:38:57.789 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:57.789 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:57.789 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:57.789 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:38:57.789 | 99.99th=[44827] 00:38:57.789 bw ( KiB/s): min= 384, max= 416, per=50.06%, avg=392.00, stdev=14.22, samples=20 00:38:57.789 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:38:57.789 lat (usec) : 1000=0.81% 00:38:57.789 lat (msec) : 50=99.19% 00:38:57.789 cpu : usr=95.17%, sys=4.54%, ctx=12, majf=0, minf=82 00:38:57.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:57.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.789 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:57.789 filename1: (groupid=0, jobs=1): err= 0: pid=1733942: Mon Oct 7 09:59:50 2024 00:38:57.789 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:38:57.789 slat (nsec): min=8050, max=20369, avg=10875.14, stdev=2962.02 00:38:57.789 clat (usec): min=40851, max=45631, avg=40997.54, stdev=317.38 00:38:57.789 lat (usec): min=40860, max=45648, avg=41008.42, stdev=317.50 00:38:57.789 clat percentiles (usec): 00:38:57.789 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:57.789 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:57.789 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:57.789 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:38:57.789 | 99.99th=[45876] 00:38:57.789 bw ( KiB/s): min= 384, max= 416, per=49.55%, avg=388.80, stdev=11.72, samples=20 00:38:57.789 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:57.789 lat (msec) : 50=100.00% 00:38:57.789 cpu : usr=96.04%, sys=3.68%, ctx=13, majf=0, minf=25 00:38:57.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:57.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.789 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.789 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:57.789 00:38:57.789 Run status group 0 (all jobs): 00:38:57.789 READ: bw=783KiB/s (802kB/s), 390KiB/s-393KiB/s (399kB/s-403kB/s), io=7840KiB (8028kB), run=10011-10012msec 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.789 00:38:57.789 real 0m11.506s 00:38:57.789 user 0m20.765s 00:38:57.789 sys 0m1.120s 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 ************************************ 00:38:57.789 END TEST fio_dif_1_multi_subsystems 00:38:57.789 ************************************ 00:38:57.789 09:59:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:57.789 
09:59:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:57.789 09:59:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 ************************************ 00:38:57.789 START TEST fio_dif_rand_params 00:38:57.789 ************************************ 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.789 bdev_null0 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.789 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.790 [2024-10-07 09:59:50.770961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:57.790 { 00:38:57.790 "params": { 00:38:57.790 "name": "Nvme$subsystem", 00:38:57.790 "trtype": "$TEST_TRANSPORT", 00:38:57.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:57.790 "adrfam": "ipv4", 00:38:57.790 "trsvcid": "$NVMF_PORT", 00:38:57.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:57.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:57.790 "hdgst": ${hdgst:-false}, 00:38:57.790 "ddgst": ${ddgst:-false} 00:38:57.790 }, 00:38:57.790 "method": "bdev_nvme_attach_controller" 00:38:57.790 } 00:38:57.790 EOF 00:38:57.790 )") 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
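The host side of fio_dif_rand_params is the same SPDK fio bdev-plugin flow seen in the trace: fio_bdev LD_PRELOADs build/fio/spdk_bdev and feeds the generated bdev_nvme JSON to fio over /dev/fd/62. A minimal standalone sketch of an equivalent invocation, assuming the attached controller exposes bdev Nvme0n1 and that the same JSON has been saved to a file (/tmp/nvme0.json is illustrative); the job parameters mirror the bs=128k / iodepth=3 / numjobs=3 / runtime=5 case set up above:

  # Sketch only -- the harness pipes the JSON via /dev/fd/62 instead of a temp file.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5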
00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:57.790 "params": { 00:38:57.790 "name": "Nvme0", 00:38:57.790 "trtype": "tcp", 00:38:57.790 "traddr": "10.0.0.2", 00:38:57.790 "adrfam": "ipv4", 00:38:57.790 "trsvcid": "4420", 00:38:57.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:57.790 "hdgst": false, 00:38:57.790 "ddgst": false 00:38:57.790 }, 00:38:57.790 "method": "bdev_nvme_attach_controller" 00:38:57.790 }' 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:57.790 09:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.790 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:57.790 ... 
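A minimal standalone sketch of the invocation traced above, for readers who want to reproduce it outside the harness: it assumes the fio plugin built at build/fio/spdk_bdev, the target created above still listening on 10.0.0.2:4420, and illustrative file names (/tmp/bdev.json, /tmp/dif.fio). The job parameters (randread, bs=128k, numjobs=3, iodepth=3, runtime=5) are taken from the trace; thread=1, time_based, the "subsystems" JSON wrapper and the Nvme0n1 bdev name are assumptions about what gen_fio_conf and gen_nvmf_target_json actually emit.

# JSON config handing the NVMe-oF namespace to the spdk_bdev ioengine
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# fio job mirroring the parameters set at target/dif.sh@103 above
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

# same launch pattern as the trace: preload the plugin, pass JSON config and job file
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/bdev.json /tmp/dif.fio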
00:38:57.790 fio-3.35 00:38:57.790 Starting 3 threads 00:39:01.998 00:39:01.998 filename0: (groupid=0, jobs=1): err= 0: pid=1735214: Mon Oct 7 09:59:56 2024 00:39:01.998 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5047msec) 00:39:01.998 slat (nsec): min=5154, max=73529, avg=19041.84, stdev=5163.58 00:39:01.998 clat (usec): min=7666, max=55034, avg=14603.34, stdev=4247.13 00:39:01.998 lat (usec): min=7681, max=55048, avg=14622.38, stdev=4247.07 00:39:01.998 clat percentiles (usec): 00:39:01.998 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11600], 20.00th=[12387], 00:39:01.998 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14353], 60.00th=[14877], 00:39:01.998 | 70.00th=[15401], 80.00th=[16057], 90.00th=[16909], 95.00th=[17695], 00:39:01.998 | 99.00th=[47973], 99.50th=[50070], 99.90th=[52691], 99.95th=[54789], 00:39:01.998 | 99.99th=[54789] 00:39:01.998 bw ( KiB/s): min=23296, max=28416, per=32.57%, avg=26368.00, stdev=1794.03, samples=10 00:39:01.998 iops : min= 182, max= 222, avg=206.00, stdev=14.02, samples=10 00:39:01.998 lat (msec) : 10=2.52%, 20=96.03%, 50=0.87%, 100=0.58% 00:39:01.998 cpu : usr=91.26%, sys=6.16%, ctx=123, majf=0, minf=0 00:39:01.998 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.998 filename0: (groupid=0, jobs=1): err= 0: pid=1735215: Mon Oct 7 09:59:56 2024 00:39:01.998 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5007msec) 00:39:01.998 slat (nsec): min=4685, max=41151, avg=19666.20, stdev=3732.76 00:39:01.998 clat (usec): min=7220, max=54523, avg=13985.51, stdev=3473.49 00:39:01.998 lat (usec): min=7242, max=54545, avg=14005.17, stdev=3473.43 00:39:01.998 clat percentiles (usec): 00:39:01.998 | 1.00th=[ 8356], 5.00th=[10552], 10.00th=[11600], 20.00th=[12387], 00:39:01.998 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:39:01.998 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16188], 95.00th=[16909], 00:39:01.998 | 99.00th=[18744], 99.50th=[50594], 99.90th=[54264], 99.95th=[54264], 00:39:01.998 | 99.99th=[54264] 00:39:01.998 bw ( KiB/s): min=26112, max=29184, per=33.80%, avg=27366.40, stdev=1055.17, samples=10 00:39:01.998 iops : min= 204, max= 228, avg=213.80, stdev= 8.24, samples=10 00:39:01.998 lat (msec) : 10=4.01%, 20=95.34%, 50=0.09%, 100=0.56% 00:39:01.998 cpu : usr=94.55%, sys=4.91%, ctx=9, majf=0, minf=0 00:39:01.998 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.998 filename0: (groupid=0, jobs=1): err= 0: pid=1735216: Mon Oct 7 09:59:56 2024 00:39:01.998 read: IOPS=215, BW=26.9MiB/s (28.3MB/s)(136MiB/5047msec) 00:39:01.998 slat (nsec): min=6686, max=67418, avg=21680.08, stdev=6794.35 00:39:01.998 clat (usec): min=8917, max=59426, avg=13848.40, stdev=3934.35 00:39:01.998 lat (usec): min=8933, max=59453, avg=13870.08, stdev=3934.48 00:39:01.998 clat percentiles (usec): 00:39:01.998 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11338], 
20.00th=[12125], 00:39:01.998 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13566], 60.00th=[13960], 00:39:01.998 | 70.00th=[14353], 80.00th=[15008], 90.00th=[15926], 95.00th=[16909], 00:39:01.998 | 99.00th=[18744], 99.50th=[52691], 99.90th=[59507], 99.95th=[59507], 00:39:01.998 | 99.99th=[59507] 00:39:01.998 bw ( KiB/s): min=24320, max=30976, per=34.34%, avg=27801.60, stdev=2030.86, samples=10 00:39:01.998 iops : min= 190, max= 242, avg=217.20, stdev=15.87, samples=10 00:39:01.998 lat (msec) : 10=1.75%, 20=97.52%, 50=0.09%, 100=0.64% 00:39:01.998 cpu : usr=94.37%, sys=4.66%, ctx=122, majf=0, minf=3 00:39:01.998 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.998 issued rwts: total=1088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.998 00:39:01.998 Run status group 0 (all jobs): 00:39:01.998 READ: bw=79.1MiB/s (82.9MB/s), 25.6MiB/s-26.9MiB/s (26.8MB/s-28.3MB/s), io=399MiB (418MB), run=5007-5047msec 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 bdev_null0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.257 [2024-10-07 09:59:56.992179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:02.257 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:02.258 09:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:02.258 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 bdev_null1 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 bdev_null2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:02.258 { 00:39:02.258 "params": { 00:39:02.258 "name": 
"Nvme$subsystem", 00:39:02.258 "trtype": "$TEST_TRANSPORT", 00:39:02.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.258 "adrfam": "ipv4", 00:39:02.258 "trsvcid": "$NVMF_PORT", 00:39:02.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.258 "hdgst": ${hdgst:-false}, 00:39:02.258 "ddgst": ${ddgst:-false} 00:39:02.258 }, 00:39:02.258 "method": "bdev_nvme_attach_controller" 00:39:02.258 } 00:39:02.258 EOF 00:39:02.258 )") 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:02.258 { 00:39:02.258 "params": { 00:39:02.258 "name": "Nvme$subsystem", 00:39:02.258 "trtype": "$TEST_TRANSPORT", 00:39:02.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.258 "adrfam": "ipv4", 00:39:02.258 "trsvcid": "$NVMF_PORT", 00:39:02.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.258 "hdgst": ${hdgst:-false}, 00:39:02.258 "ddgst": ${ddgst:-false} 00:39:02.258 }, 00:39:02.258 "method": "bdev_nvme_attach_controller" 00:39:02.258 } 00:39:02.258 EOF 00:39:02.258 )") 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:02.258 { 00:39:02.258 "params": { 00:39:02.258 "name": "Nvme$subsystem", 00:39:02.258 "trtype": "$TEST_TRANSPORT", 00:39:02.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.258 "adrfam": "ipv4", 00:39:02.258 "trsvcid": "$NVMF_PORT", 00:39:02.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.258 "hdgst": ${hdgst:-false}, 00:39:02.258 "ddgst": ${ddgst:-false} 00:39:02.258 }, 00:39:02.258 "method": "bdev_nvme_attach_controller" 00:39:02.258 } 00:39:02.258 EOF 00:39:02.258 )") 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:02.258 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:02.517 "params": { 00:39:02.517 "name": "Nvme0", 00:39:02.517 "trtype": "tcp", 00:39:02.517 "traddr": "10.0.0.2", 00:39:02.517 "adrfam": "ipv4", 00:39:02.517 "trsvcid": "4420", 00:39:02.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:02.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:02.517 "hdgst": false, 00:39:02.517 "ddgst": false 00:39:02.517 }, 00:39:02.517 "method": "bdev_nvme_attach_controller" 00:39:02.517 },{ 00:39:02.517 "params": { 00:39:02.517 "name": "Nvme1", 00:39:02.517 "trtype": "tcp", 00:39:02.517 "traddr": "10.0.0.2", 00:39:02.517 "adrfam": "ipv4", 00:39:02.517 "trsvcid": "4420", 00:39:02.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:02.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:02.517 "hdgst": false, 00:39:02.517 "ddgst": false 00:39:02.517 }, 00:39:02.517 "method": "bdev_nvme_attach_controller" 00:39:02.517 },{ 00:39:02.517 "params": { 00:39:02.517 "name": "Nvme2", 00:39:02.517 "trtype": "tcp", 00:39:02.517 "traddr": "10.0.0.2", 00:39:02.517 "adrfam": "ipv4", 00:39:02.517 "trsvcid": "4420", 00:39:02.517 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:02.517 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:02.517 "hdgst": false, 00:39:02.517 "ddgst": false 00:39:02.517 }, 00:39:02.517 "method": "bdev_nvme_attach_controller" 00:39:02.517 }' 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:02.517 09:59:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:02.517 09:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:02.775 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:02.775 ... 00:39:02.775 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:02.775 ... 00:39:02.775 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:02.775 ... 00:39:02.775 fio-3.35 00:39:02.775 Starting 24 threads 00:39:14.978 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736079: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10104msec) 00:39:14.978 slat (usec): min=11, max=102, avg=71.33, stdev=14.36 00:39:14.978 clat (msec): min=96, max=395, avg=272.44, stdev=45.82 00:39:14.978 lat (msec): min=96, max=395, avg=272.52, stdev=45.83 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 97], 5.00th=[ 194], 10.00th=[ 201], 20.00th=[ 247], 00:39:14.978 | 30.00th=[ 259], 40.00th=[ 266], 50.00th=[ 292], 60.00th=[ 296], 00:39:14.978 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:39:14.978 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 397], 99.95th=[ 397], 00:39:14.978 | 99.99th=[ 397] 00:39:14.978 bw ( KiB/s): min= 128, max= 256, per=3.79%, avg=230.40, stdev=52.53, samples=20 00:39:14.978 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:39:14.978 lat (msec) : 100=2.70%, 250=20.61%, 500=76.69% 00:39:14.978 cpu : usr=98.12%, sys=1.42%, ctx=9, majf=0, minf=9 00:39:14.978 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736080: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10082msec) 00:39:14.978 slat (nsec): min=8362, max=79939, avg=15718.35, stdev=11272.48 00:39:14.978 clat (msec): min=104, max=377, avg=234.10, stdev=48.82 00:39:14.978 lat (msec): min=104, max=377, avg=234.12, stdev=48.82 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 105], 5.00th=[ 163], 10.00th=[ 186], 20.00th=[ 197], 00:39:14.978 | 30.00th=[ 199], 40.00th=[ 209], 50.00th=[ 224], 60.00th=[ 253], 00:39:14.978 | 70.00th=[ 262], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 309], 00:39:14.978 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 380], 99.95th=[ 380], 00:39:14.978 | 99.99th=[ 380] 00:39:14.978 bw ( KiB/s): min= 128, max= 384, per=4.42%, avg=268.80, stdev=66.60, samples=20 00:39:14.978 iops : min= 32, max= 96, avg=67.20, stdev=16.65, samples=20 00:39:14.978 lat (msec) : 250=57.70%, 500=42.30% 00:39:14.978 cpu : usr=98.32%, sys=1.27%, ctx=15, majf=0, minf=9 00:39:14.978 IO depths : 1=3.9%, 2=9.3%, 4=22.4%, 8=55.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736081: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=76, BW=304KiB/s (311kB/s)(3072KiB/10103msec) 00:39:14.978 slat (nsec): min=8512, max=82133, avg=19140.07, stdev=16288.72 00:39:14.978 clat (msec): min=139, max=338, avg=209.03, stdev=27.00 00:39:14.978 lat (msec): min=139, max=338, avg=209.05, stdev=27.01 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 140], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 194], 00:39:14.978 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 207], 00:39:14.978 | 70.00th=[ 213], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 262], 00:39:14.978 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 338], 99.95th=[ 338], 00:39:14.978 | 99.99th=[ 338] 00:39:14.978 bw ( KiB/s): min= 256, max= 384, per=4.95%, avg=300.80, stdev=56.29, samples=20 00:39:14.978 iops : min= 64, max= 96, avg=75.20, stdev=14.07, samples=20 00:39:14.978 lat (msec) : 250=85.68%, 500=14.32% 00:39:14.978 cpu : usr=98.26%, sys=1.28%, ctx=25, majf=0, minf=9 00:39:14.978 IO depths : 1=1.3%, 2=7.6%, 4=25.0%, 8=54.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736082: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10105msec) 00:39:14.978 slat (usec): min=11, max=101, avg=65.61, stdev=21.25 00:39:14.978 clat (msec): min=96, max=412, avg=272.56, stdev=56.73 00:39:14.978 lat (msec): min=96, max=412, avg=272.63, stdev=56.75 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 97], 5.00th=[ 159], 10.00th=[ 186], 20.00th=[ 243], 00:39:14.978 | 30.00th=[ 253], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 296], 00:39:14.978 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 334], 00:39:14.978 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:39:14.978 | 99.99th=[ 414] 00:39:14.978 bw ( KiB/s): min= 128, max= 368, per=3.79%, avg=230.40, stdev=60.85, samples=20 00:39:14.978 iops : min= 32, max= 92, avg=57.60, stdev=15.21, samples=20 00:39:14.978 lat (msec) : 100=2.70%, 250=20.61%, 500=76.69% 00:39:14.978 cpu : usr=98.44%, sys=1.12%, ctx=12, majf=0, minf=9 00:39:14.978 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736083: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10081msec) 00:39:14.978 slat (usec): min=17, max=107, avg=71.99, stdev=14.05 00:39:14.978 clat (msec): min=179, max=447, avg=287.39, stdev=35.68 00:39:14.978 lat (msec): min=179, max=447, avg=287.46, stdev=35.69 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 192], 5.00th=[ 234], 
10.00th=[ 249], 20.00th=[ 259], 00:39:14.978 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:39:14.978 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 347], 00:39:14.978 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 447], 00:39:14.978 | 99.99th=[ 447] 00:39:14.978 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=58.59, samples=20 00:39:14.978 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:39:14.978 lat (msec) : 250=11.43%, 500=88.57% 00:39:14.978 cpu : usr=98.45%, sys=1.11%, ctx=15, majf=0, minf=9 00:39:14.978 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736084: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=72, BW=291KiB/s (298kB/s)(2944KiB/10103msec) 00:39:14.978 slat (nsec): min=6421, max=97914, avg=27688.49, stdev=26221.87 00:39:14.978 clat (msec): min=155, max=353, avg=218.08, stdev=30.40 00:39:14.978 lat (msec): min=155, max=353, avg=218.10, stdev=30.42 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 197], 00:39:14.978 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 211], 00:39:14.978 | 70.00th=[ 226], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 275], 00:39:14.978 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:39:14.978 | 99.99th=[ 355] 00:39:14.978 bw ( KiB/s): min= 144, max= 384, per=4.73%, avg=288.00, stdev=63.37, samples=20 00:39:14.978 iops : min= 36, max= 96, avg=72.00, stdev=15.84, samples=20 00:39:14.978 lat (msec) : 250=80.57%, 500=19.43% 00:39:14.978 cpu : usr=98.32%, sys=1.24%, ctx=38, majf=0, minf=9 00:39:14.978 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736085: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=59, BW=237KiB/s (243kB/s)(2392KiB/10096msec) 00:39:14.978 slat (usec): min=7, max=115, avg=57.65, stdev=26.11 00:39:14.978 clat (msec): min=161, max=447, avg=269.70, stdev=50.46 00:39:14.978 lat (msec): min=161, max=447, avg=269.76, stdev=50.46 00:39:14.978 clat percentiles (msec): 00:39:14.978 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 230], 00:39:14.978 | 30.00th=[ 253], 40.00th=[ 264], 50.00th=[ 279], 60.00th=[ 292], 00:39:14.978 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:39:14.978 | 99.00th=[ 414], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:39:14.978 | 99.99th=[ 447] 00:39:14.978 bw ( KiB/s): min= 128, max= 384, per=3.83%, avg=232.80, stdev=69.16, samples=20 00:39:14.978 iops : min= 32, max= 96, avg=58.20, stdev=17.29, samples=20 00:39:14.978 lat (msec) : 250=28.43%, 500=71.57% 00:39:14.978 cpu : usr=98.38%, sys=1.19%, ctx=14, majf=0, minf=9 00:39:14.978 IO depths : 1=2.8%, 2=8.2%, 4=22.2%, 8=57.0%, 16=9.7%, 32=0.0%, >=64=0.0% 00:39:14.978 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.978 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.978 filename0: (groupid=0, jobs=1): err= 0: pid=1736086: Mon Oct 7 10:00:08 2024 00:39:14.978 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10083msec) 00:39:14.979 slat (usec): min=5, max=104, avg=71.11, stdev=16.55 00:39:14.979 clat (msec): min=159, max=447, avg=287.49, stdev=42.80 00:39:14.979 lat (msec): min=159, max=447, avg=287.56, stdev=42.81 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 171], 5.00th=[ 199], 10.00th=[ 241], 20.00th=[ 255], 00:39:14.979 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:39:14.979 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 351], 00:39:14.979 | 99.00th=[ 418], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:39:14.979 | 99.99th=[ 447] 00:39:14.979 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=56.96, samples=20 00:39:14.979 iops : min= 32, max= 64, avg=54.40, stdev=14.24, samples=20 00:39:14.979 lat (msec) : 250=13.57%, 500=86.43% 00:39:14.979 cpu : usr=98.37%, sys=1.18%, ctx=15, majf=0, minf=9 00:39:14.979 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736087: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10082msec) 00:39:14.979 slat (usec): min=8, max=101, avg=53.07, stdev=30.21 00:39:14.979 clat (msec): min=169, max=403, avg=257.35, stdev=47.55 00:39:14.979 lat (msec): min=169, max=403, avg=257.41, stdev=47.57 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 201], 00:39:14.979 | 30.00th=[ 226], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 271], 00:39:14.979 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 309], 95.00th=[ 330], 00:39:14.979 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 405], 99.95th=[ 405], 00:39:14.979 | 99.99th=[ 405] 00:39:14.979 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=243.20, stdev=69.37, samples=20 00:39:14.979 iops : min= 32, max= 96, avg=60.80, stdev=17.34, samples=20 00:39:14.979 lat (msec) : 250=40.87%, 500=59.13% 00:39:14.979 cpu : usr=98.37%, sys=1.20%, ctx=14, majf=0, minf=11 00:39:14.979 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736088: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10103msec) 00:39:14.979 slat (usec): min=7, max=107, avg=73.89, stdev=15.40 00:39:14.979 clat (msec): min=161, max=402, avg=286.36, stdev=36.20 00:39:14.979 lat (msec): min=161, max=402, avg=286.43, stdev=36.20 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 
165], 5.00th=[ 236], 10.00th=[ 245], 20.00th=[ 253], 00:39:14.979 | 30.00th=[ 266], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:39:14.979 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 330], 00:39:14.979 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:39:14.979 | 99.99th=[ 401] 00:39:14.979 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=55.28, samples=20 00:39:14.979 iops : min= 32, max= 64, avg=54.40, stdev=13.82, samples=20 00:39:14.979 lat (msec) : 250=15.54%, 500=84.46% 00:39:14.979 cpu : usr=98.14%, sys=1.42%, ctx=14, majf=0, minf=9 00:39:14.979 IO depths : 1=2.3%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736089: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=80, BW=322KiB/s (329kB/s)(3256KiB/10123msec) 00:39:14.979 slat (nsec): min=4851, max=56630, avg=20486.58, stdev=5877.48 00:39:14.979 clat (msec): min=45, max=339, avg=198.32, stdev=45.24 00:39:14.979 lat (msec): min=45, max=339, avg=198.34, stdev=45.24 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 46], 5.00th=[ 130], 10.00th=[ 157], 20.00th=[ 176], 00:39:14.979 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 203], 00:39:14.979 | 70.00th=[ 207], 80.00th=[ 224], 90.00th=[ 253], 95.00th=[ 262], 00:39:14.979 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 338], 00:39:14.979 | 99.99th=[ 338] 00:39:14.979 bw ( KiB/s): min= 256, max= 512, per=5.26%, avg=319.20, stdev=60.20, samples=20 00:39:14.979 iops : min= 64, max= 128, avg=79.80, stdev=15.05, samples=20 00:39:14.979 lat (msec) : 50=1.97%, 100=1.97%, 250=83.29%, 500=12.78% 00:39:14.979 cpu : usr=97.79%, sys=1.64%, ctx=28, majf=0, minf=9 00:39:14.979 IO depths : 1=1.4%, 2=3.8%, 4=13.4%, 8=70.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=90.7%, 8=3.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736090: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=77, BW=311KiB/s (319kB/s)(3144KiB/10105msec) 00:39:14.979 slat (nsec): min=8273, max=87410, avg=16615.23, stdev=17664.86 00:39:14.979 clat (msec): min=116, max=322, avg=205.33, stdev=37.20 00:39:14.979 lat (msec): min=116, max=322, avg=205.34, stdev=37.21 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 117], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 174], 00:39:14.979 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 207], 00:39:14.979 | 70.00th=[ 211], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 271], 00:39:14.979 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:39:14.979 | 99.99th=[ 321] 00:39:14.979 bw ( KiB/s): min= 256, max= 384, per=5.08%, avg=308.00, stdev=37.03, samples=20 00:39:14.979 iops : min= 64, max= 96, avg=77.00, stdev= 9.26, samples=20 00:39:14.979 lat (msec) : 250=87.66%, 500=12.34% 00:39:14.979 cpu : usr=98.47%, sys=1.13%, ctx=21, majf=0, minf=9 00:39:14.979 IO depths : 1=0.6%, 2=1.7%, 4=8.8%, 8=76.7%, 
16=12.2%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=89.4%, 8=5.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736091: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=58, BW=236KiB/s (242kB/s)(2368KiB/10034msec) 00:39:14.979 slat (usec): min=11, max=102, avg=64.42, stdev=20.35 00:39:14.979 clat (msec): min=96, max=374, avg=270.63, stdev=50.61 00:39:14.979 lat (msec): min=96, max=374, avg=270.70, stdev=50.63 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 97], 5.00th=[ 159], 10.00th=[ 201], 20.00th=[ 245], 00:39:14.979 | 30.00th=[ 255], 40.00th=[ 271], 50.00th=[ 292], 60.00th=[ 296], 00:39:14.979 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 326], 00:39:14.979 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:39:14.979 | 99.99th=[ 376] 00:39:14.979 bw ( KiB/s): min= 128, max= 368, per=3.79%, avg=230.40, stdev=62.60, samples=20 00:39:14.979 iops : min= 32, max= 92, avg=57.60, stdev=15.65, samples=20 00:39:14.979 lat (msec) : 100=2.70%, 250=21.79%, 500=75.51% 00:39:14.979 cpu : usr=98.50%, sys=1.05%, ctx=16, majf=0, minf=9 00:39:14.979 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736092: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10082msec) 00:39:14.979 slat (usec): min=8, max=102, avg=60.10, stdev=26.06 00:39:14.979 clat (msec): min=165, max=417, avg=285.82, stdev=37.35 00:39:14.979 lat (msec): min=165, max=417, avg=285.88, stdev=37.35 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 184], 5.00th=[ 230], 10.00th=[ 241], 20.00th=[ 255], 00:39:14.979 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 300], 00:39:14.979 | 70.00th=[ 305], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 351], 00:39:14.979 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:39:14.979 | 99.99th=[ 418] 00:39:14.979 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=58.59, samples=20 00:39:14.979 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:39:14.979 lat (msec) : 250=14.29%, 500=85.71% 00:39:14.979 cpu : usr=98.28%, sys=1.28%, ctx=15, majf=0, minf=9 00:39:14.979 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=1736093: Mon Oct 7 10:00:08 2024 00:39:14.979 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10084msec) 00:39:14.979 slat (nsec): min=4127, max=45093, avg=15535.65, stdev=6151.40 00:39:14.979 clat (msec): min=163, max=417, avg=286.23, stdev=44.57 00:39:14.979 lat (msec): min=163, max=417, 
avg=286.24, stdev=44.57 00:39:14.979 clat percentiles (msec): 00:39:14.979 | 1.00th=[ 163], 5.00th=[ 203], 10.00th=[ 234], 20.00th=[ 253], 00:39:14.979 | 30.00th=[ 266], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:39:14.979 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 351], 00:39:14.979 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 418], 00:39:14.979 | 99.99th=[ 418] 00:39:14.979 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=60.18, samples=20 00:39:14.979 iops : min= 32, max= 64, avg=54.40, stdev=15.05, samples=20 00:39:14.979 lat (msec) : 250=12.86%, 500=87.14% 00:39:14.979 cpu : usr=98.43%, sys=1.15%, ctx=9, majf=0, minf=9 00:39:14.979 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:39:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.979 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename1: (groupid=0, jobs=1): err= 0: pid=1736094: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10096msec) 00:39:14.980 slat (usec): min=7, max=102, avg=55.02, stdev=26.24 00:39:14.980 clat (msec): min=161, max=394, avg=278.29, stdev=42.46 00:39:14.980 lat (msec): min=161, max=395, avg=278.34, stdev=42.47 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 163], 5.00th=[ 197], 10.00th=[ 209], 20.00th=[ 253], 00:39:14.980 | 30.00th=[ 262], 40.00th=[ 275], 50.00th=[ 292], 60.00th=[ 296], 00:39:14.980 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 330], 00:39:14.980 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:39:14.980 | 99.99th=[ 397] 00:39:14.980 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=224.00, stdev=69.26, samples=20 00:39:14.980 iops : min= 32, max= 96, avg=56.00, stdev=17.31, samples=20 00:39:14.980 lat (msec) : 250=19.44%, 500=80.56% 00:39:14.980 cpu : usr=98.24%, sys=1.31%, ctx=16, majf=0, minf=9 00:39:14.980 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736095: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10094msec) 00:39:14.980 slat (nsec): min=5104, max=94924, avg=17048.49, stdev=7155.20 00:39:14.980 clat (msec): min=164, max=441, avg=288.21, stdev=38.18 00:39:14.980 lat (msec): min=164, max=441, avg=288.23, stdev=38.18 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 174], 5.00th=[ 234], 10.00th=[ 249], 20.00th=[ 262], 00:39:14.980 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 300], 00:39:14.980 | 70.00th=[ 305], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 363], 00:39:14.980 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 443], 99.95th=[ 443], 00:39:14.980 | 99.99th=[ 443] 00:39:14.980 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=58.59, samples=20 00:39:14.980 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:39:14.980 lat (msec) : 250=11.79%, 500=88.21% 00:39:14.980 cpu : usr=98.18%, sys=1.31%, ctx=39, majf=0, minf=9 00:39:14.980 
IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736096: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=72, BW=292KiB/s (299kB/s)(2944KiB/10084msec) 00:39:14.980 slat (nsec): min=8383, max=99920, avg=28818.24, stdev=27154.46 00:39:14.980 clat (msec): min=111, max=373, avg=218.96, stdev=34.06 00:39:14.980 lat (msec): min=111, max=373, avg=218.99, stdev=34.08 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 165], 5.00th=[ 171], 10.00th=[ 192], 20.00th=[ 197], 00:39:14.980 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 215], 00:39:14.980 | 70.00th=[ 228], 80.00th=[ 251], 90.00th=[ 266], 95.00th=[ 279], 00:39:14.980 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:39:14.980 | 99.99th=[ 372] 00:39:14.980 bw ( KiB/s): min= 128, max= 384, per=4.75%, avg=288.00, stdev=64.84, samples=20 00:39:14.980 iops : min= 32, max= 96, avg=72.00, stdev=16.21, samples=20 00:39:14.980 lat (msec) : 250=79.62%, 500=20.38% 00:39:14.980 cpu : usr=98.34%, sys=1.23%, ctx=11, majf=0, minf=9 00:39:14.980 IO depths : 1=1.2%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736097: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=57, BW=228KiB/s (233kB/s)(2304KiB/10105msec) 00:39:14.980 slat (usec): min=11, max=105, avg=73.66, stdev=14.26 00:39:14.980 clat (msec): min=170, max=411, avg=280.04, stdev=35.29 00:39:14.980 lat (msec): min=170, max=411, avg=280.11, stdev=35.30 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 171], 5.00th=[ 211], 10.00th=[ 243], 20.00th=[ 253], 00:39:14.980 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 296], 00:39:14.980 | 70.00th=[ 305], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:39:14.980 | 99.00th=[ 334], 99.50th=[ 405], 99.90th=[ 414], 99.95th=[ 414], 00:39:14.980 | 99.99th=[ 414] 00:39:14.980 bw ( KiB/s): min= 128, max= 256, per=3.68%, avg=224.00, stdev=55.18, samples=20 00:39:14.980 iops : min= 32, max= 64, avg=56.00, stdev=13.80, samples=20 00:39:14.980 lat (msec) : 250=17.01%, 500=82.99% 00:39:14.980 cpu : usr=98.29%, sys=1.28%, ctx=9, majf=0, minf=9 00:39:14.980 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736098: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=85, BW=341KiB/s (349kB/s)(3448KiB/10125msec) 00:39:14.980 slat (nsec): min=3901, max=97145, avg=17115.58, stdev=15081.67 00:39:14.980 clat (msec): min=28, max=318, avg=187.50, stdev=38.57 
00:39:14.980 lat (msec): min=28, max=318, avg=187.52, stdev=38.57 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 29], 5.00th=[ 86], 10.00th=[ 165], 20.00th=[ 171], 00:39:14.980 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 199], 00:39:14.980 | 70.00th=[ 203], 80.00th=[ 209], 90.00th=[ 224], 95.00th=[ 230], 00:39:14.980 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 317], 99.95th=[ 317], 00:39:14.980 | 99.99th=[ 317] 00:39:14.980 bw ( KiB/s): min= 256, max= 512, per=5.59%, avg=339.20, stdev=68.59, samples=20 00:39:14.980 iops : min= 64, max= 128, avg=84.80, stdev=17.15, samples=20 00:39:14.980 lat (msec) : 50=3.71%, 100=1.86%, 250=94.20%, 500=0.23% 00:39:14.980 cpu : usr=98.26%, sys=1.20%, ctx=50, majf=0, minf=9 00:39:14.980 IO depths : 1=0.8%, 2=7.1%, 4=25.1%, 8=55.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736099: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=57, BW=228KiB/s (233kB/s)(2304KiB/10105msec) 00:39:14.980 slat (usec): min=6, max=112, avg=69.14, stdev=18.29 00:39:14.980 clat (msec): min=158, max=411, avg=280.09, stdev=39.03 00:39:14.980 lat (msec): min=158, max=412, avg=280.16, stdev=39.04 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 171], 5.00th=[ 211], 10.00th=[ 239], 20.00th=[ 253], 00:39:14.980 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 296], 00:39:14.980 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 326], 00:39:14.980 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:39:14.980 | 99.99th=[ 414] 00:39:14.980 bw ( KiB/s): min= 128, max= 256, per=3.68%, avg=224.00, stdev=53.45, samples=20 00:39:14.980 iops : min= 32, max= 64, avg=56.00, stdev=13.36, samples=20 00:39:14.980 lat (msec) : 250=17.53%, 500=82.47% 00:39:14.980 cpu : usr=98.34%, sys=1.22%, ctx=12, majf=0, minf=9 00:39:14.980 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736100: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=74, BW=299KiB/s (306kB/s)(3024KiB/10105msec) 00:39:14.980 slat (nsec): min=5282, max=74004, avg=16177.58, stdev=10128.85 00:39:14.980 clat (msec): min=106, max=345, avg=213.50, stdev=40.23 00:39:14.980 lat (msec): min=106, max=345, avg=213.52, stdev=40.23 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 107], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 184], 00:39:14.980 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 209], 00:39:14.980 | 70.00th=[ 226], 80.00th=[ 245], 90.00th=[ 271], 95.00th=[ 296], 00:39:14.980 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:39:14.980 | 99.99th=[ 347] 00:39:14.980 bw ( KiB/s): min= 128, max= 384, per=4.88%, avg=296.00, stdev=58.85, samples=20 00:39:14.980 iops : min= 32, max= 96, avg=74.00, stdev=14.71, samples=20 00:39:14.980 lat (msec) : 250=81.22%, 500=18.78% 00:39:14.980 
cpu : usr=98.38%, sys=1.13%, ctx=35, majf=0, minf=9 00:39:14.980 IO depths : 1=1.3%, 2=4.2%, 4=14.7%, 8=68.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:39:14.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.980 issued rwts: total=756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.980 filename2: (groupid=0, jobs=1): err= 0: pid=1736101: Mon Oct 7 10:00:08 2024 00:39:14.980 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10082msec) 00:39:14.980 slat (usec): min=26, max=106, avg=76.57, stdev=11.87 00:39:14.980 clat (msec): min=156, max=432, avg=287.39, stdev=42.97 00:39:14.980 lat (msec): min=156, max=432, avg=287.47, stdev=42.97 00:39:14.980 clat percentiles (msec): 00:39:14.980 | 1.00th=[ 174], 5.00th=[ 199], 10.00th=[ 241], 20.00th=[ 255], 00:39:14.980 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:39:14.980 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 351], 00:39:14.980 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:39:14.980 | 99.99th=[ 430] 00:39:14.980 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=55.28, samples=20 00:39:14.980 iops : min= 32, max= 64, avg=54.40, stdev=13.82, samples=20 00:39:14.980 lat (msec) : 250=13.57%, 500=86.43% 00:39:14.981 cpu : usr=98.16%, sys=1.38%, ctx=15, majf=0, minf=9 00:39:14.981 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:39:14.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.981 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.981 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.981 filename2: (groupid=0, jobs=1): err= 0: pid=1736102: Mon Oct 7 10:00:08 2024 00:39:14.981 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10084msec) 00:39:14.981 slat (usec): min=21, max=108, avg=71.40, stdev=11.49 00:39:14.981 clat (msec): min=170, max=434, avg=287.45, stdev=39.95 00:39:14.981 lat (msec): min=170, max=434, avg=287.52, stdev=39.95 00:39:14.981 clat percentiles (msec): 00:39:14.981 | 1.00th=[ 171], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 253], 00:39:14.981 | 30.00th=[ 264], 40.00th=[ 279], 50.00th=[ 296], 60.00th=[ 296], 00:39:14.981 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 334], 00:39:14.981 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:39:14.981 | 99.99th=[ 435] 00:39:14.981 bw ( KiB/s): min= 128, max= 256, per=3.58%, avg=217.60, stdev=60.18, samples=20 00:39:14.981 iops : min= 32, max= 64, avg=54.40, stdev=15.05, samples=20 00:39:14.981 lat (msec) : 250=11.43%, 500=88.57% 00:39:14.981 cpu : usr=98.45%, sys=1.11%, ctx=15, majf=0, minf=9 00:39:14.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:14.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:14.981 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:14.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:14.981 00:39:14.981 Run status group 0 (all jobs): 00:39:14.981 READ: bw=6065KiB/s (6211kB/s), 222KiB/s-341KiB/s (227kB/s-349kB/s), io=60.0MiB (62.9MB), run=10034-10125msec 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 
-- # destroy_subsystems 0 1 2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # NULL_DIF=1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 bdev_null0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 [2024-10-07 10:00:08.873974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
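The destroy/create cycle traced above boils down to a handful of SPDK JSON-RPC calls. A minimal standalone sketch of the same sequence follows, assuming a running nvmf target with a TCP transport already created and an SPDK source tree as the working directory (rpc_cmd in the test harness is essentially a wrapper around scripts/rpc.py); the bdev parameters, NQNs and the 10.0.0.2:4420 listener are copied verbatim from the trace.

# Sketch only: replay the RPCs shown in the trace against a running SPDK nvmf target.
# Assumes a TCP transport already exists (e.g. scripts/rpc.py nvmf_create_transport -t tcp).

# null bdev with 512-byte blocks, 16 bytes of metadata per block and DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# expose it through an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown, mirroring the destroy_subsystems calls above
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0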
00:39:14.981 bdev_null1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:14.981 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:14.981 { 00:39:14.981 "params": { 00:39:14.981 "name": "Nvme$subsystem", 00:39:14.981 "trtype": "$TEST_TRANSPORT", 00:39:14.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.981 "adrfam": "ipv4", 00:39:14.981 "trsvcid": "$NVMF_PORT", 00:39:14.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.982 "hdgst": ${hdgst:-false}, 00:39:14.982 "ddgst": ${ddgst:-false} 00:39:14.982 }, 00:39:14.982 "method": "bdev_nvme_attach_controller" 00:39:14.982 } 00:39:14.982 EOF 00:39:14.982 )") 00:39:14.982 10:00:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:14.982 { 00:39:14.982 "params": { 00:39:14.982 "name": "Nvme$subsystem", 00:39:14.982 "trtype": "$TEST_TRANSPORT", 00:39:14.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.982 "adrfam": "ipv4", 00:39:14.982 "trsvcid": "$NVMF_PORT", 00:39:14.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.982 "hdgst": ${hdgst:-false}, 00:39:14.982 "ddgst": ${ddgst:-false} 00:39:14.982 }, 00:39:14.982 "method": "bdev_nvme_attach_controller" 00:39:14.982 } 00:39:14.982 EOF 00:39:14.982 )") 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:14.982 "params": { 00:39:14.982 "name": "Nvme0", 00:39:14.982 "trtype": "tcp", 00:39:14.982 "traddr": "10.0.0.2", 00:39:14.982 "adrfam": "ipv4", 00:39:14.982 "trsvcid": "4420", 00:39:14.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.982 "hdgst": false, 00:39:14.982 "ddgst": false 00:39:14.982 }, 00:39:14.982 "method": "bdev_nvme_attach_controller" 00:39:14.982 },{ 00:39:14.982 "params": { 00:39:14.982 "name": "Nvme1", 00:39:14.982 "trtype": "tcp", 00:39:14.982 "traddr": "10.0.0.2", 00:39:14.982 "adrfam": "ipv4", 00:39:14.982 "trsvcid": "4420", 00:39:14.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:14.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:14.982 "hdgst": false, 00:39:14.982 "ddgst": false 00:39:14.982 }, 00:39:14.982 "method": "bdev_nvme_attach_controller" 00:39:14.982 }' 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:14.982 10:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.982 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:14.982 ... 00:39:14.982 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:14.982 ... 
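What the trace above amounts to is stock fio driven through SPDK's external bdev ioengine: the fio plugin is LD_PRELOADed and the bdev_nvme_attach_controller configuration printed just above is handed to it via --spdk_json_conf. Stripped of the /dev/fd plumbing used by the harness, roughly the same invocation can be made by hand as sketched below; SPDK_DIR, bdev.json and job.fio are placeholder names, not names used by the harness.

# Sketch: run fio against SPDK bdevs rather than kernel block devices.
# bdev.json holds the attach-controller config printed in the trace;
# job.fio is an ordinary fio job file whose filename= entries name bdevs.
SPDK_DIR=/path/to/spdk

LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The job file itself is plain fio syntax; the SPDK-specific parts are only the ioengine and the JSON configuration, which is why the per-job output that follows reads like any other fio run.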
00:39:14.982 fio-3.35 00:39:14.982 Starting 4 threads 00:39:20.238 00:39:20.238 filename0: (groupid=0, jobs=1): err= 0: pid=1738099: Mon Oct 7 10:00:14 2024 00:39:20.238 read: IOPS=1901, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5003msec) 00:39:20.238 slat (nsec): min=3992, max=71711, avg=21213.37, stdev=8986.90 00:39:20.238 clat (usec): min=876, max=7639, avg=4130.60, stdev=597.52 00:39:20.238 lat (usec): min=904, max=7659, avg=4151.81, stdev=597.39 00:39:20.238 clat percentiles (usec): 00:39:20.238 | 1.00th=[ 2507], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3851], 00:39:20.238 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4228], 00:39:20.238 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5080], 00:39:20.238 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7373], 00:39:20.238 | 99.99th=[ 7635] 00:39:20.238 bw ( KiB/s): min=14669, max=16064, per=25.24%, avg=15207.70, stdev=511.86, samples=10 00:39:20.238 iops : min= 1833, max= 2008, avg=1900.90, stdev=64.06, samples=10 00:39:20.238 lat (usec) : 1000=0.03% 00:39:20.238 lat (msec) : 2=0.44%, 4=25.88%, 10=73.65% 00:39:20.238 cpu : usr=95.18%, sys=4.20%, ctx=17, majf=0, minf=0 00:39:20.238 IO depths : 1=0.3%, 2=17.0%, 4=56.1%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.238 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.238 issued rwts: total=9511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.238 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.239 filename0: (groupid=0, jobs=1): err= 0: pid=1738100: Mon Oct 7 10:00:14 2024 00:39:20.239 read: IOPS=1893, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5004msec) 00:39:20.239 slat (nsec): min=3971, max=64719, avg=15388.25, stdev=8672.98 00:39:20.239 clat (usec): min=755, max=7685, avg=4175.54, stdev=527.46 00:39:20.239 lat (usec): min=770, max=7697, avg=4190.92, stdev=527.64 00:39:20.239 clat percentiles (usec): 00:39:20.239 | 1.00th=[ 2671], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3916], 00:39:20.239 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:39:20.239 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5014], 00:39:20.239 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7177], 99.95th=[ 7308], 00:39:20.239 | 99.99th=[ 7701] 00:39:20.239 bw ( KiB/s): min=14752, max=15536, per=25.14%, avg=15147.20, stdev=269.22, samples=10 00:39:20.239 iops : min= 1844, max= 1942, avg=1893.40, stdev=33.65, samples=10 00:39:20.239 lat (usec) : 1000=0.04% 00:39:20.239 lat (msec) : 2=0.38%, 4=23.18%, 10=76.40% 00:39:20.239 cpu : usr=94.84%, sys=4.68%, ctx=6, majf=0, minf=0 00:39:20.239 IO depths : 1=0.4%, 2=8.2%, 4=64.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 issued rwts: total=9475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.239 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.239 filename1: (groupid=0, jobs=1): err= 0: pid=1738101: Mon Oct 7 10:00:14 2024 00:39:20.239 read: IOPS=1843, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5001msec) 00:39:20.239 slat (nsec): min=3906, max=75984, avg=19582.15, stdev=10702.95 00:39:20.239 clat (usec): min=711, max=8100, avg=4269.89, stdev=603.34 00:39:20.239 lat (usec): min=726, max=8117, avg=4289.47, stdev=602.52 00:39:20.239 clat percentiles (usec): 00:39:20.239 | 1.00th=[ 2573], 5.00th=[ 
3490], 10.00th=[ 3818], 20.00th=[ 4015], 00:39:20.239 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:39:20.239 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5407], 00:39:20.239 | 99.00th=[ 6587], 99.50th=[ 7046], 99.90th=[ 7373], 99.95th=[ 7504], 00:39:20.239 | 99.99th=[ 8094] 00:39:20.239 bw ( KiB/s): min=14256, max=15200, per=24.51%, avg=14767.67, stdev=299.97, samples=9 00:39:20.239 iops : min= 1782, max= 1900, avg=1845.89, stdev=37.50, samples=9 00:39:20.239 lat (usec) : 750=0.01%, 1000=0.04% 00:39:20.239 lat (msec) : 2=0.37%, 4=17.02%, 10=82.56% 00:39:20.239 cpu : usr=95.34%, sys=4.20%, ctx=7, majf=0, minf=9 00:39:20.239 IO depths : 1=0.4%, 2=13.0%, 4=60.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 issued rwts: total=9219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.239 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.239 filename1: (groupid=0, jobs=1): err= 0: pid=1738102: Mon Oct 7 10:00:14 2024 00:39:20.239 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:39:20.239 slat (nsec): min=4507, max=79601, avg=18216.99, stdev=10463.50 00:39:20.239 clat (usec): min=858, max=7678, avg=4156.63, stdev=545.98 00:39:20.239 lat (usec): min=877, max=7695, avg=4174.84, stdev=545.96 00:39:20.239 clat percentiles (usec): 00:39:20.239 | 1.00th=[ 2573], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3884], 00:39:20.239 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:39:20.239 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5014], 00:39:20.239 | 99.00th=[ 5997], 99.50th=[ 6652], 99.90th=[ 7373], 99.95th=[ 7373], 00:39:20.239 | 99.99th=[ 7701] 00:39:20.239 bw ( KiB/s): min=14784, max=15584, per=25.23%, avg=15201.78, stdev=263.33, samples=9 00:39:20.239 iops : min= 1848, max= 1948, avg=1900.22, stdev=32.92, samples=9 00:39:20.239 lat (usec) : 1000=0.06% 00:39:20.239 lat (msec) : 2=0.32%, 4=25.19%, 10=74.43% 00:39:20.239 cpu : usr=94.86%, sys=4.64%, ctx=8, majf=0, minf=10 00:39:20.239 IO depths : 1=0.8%, 2=14.3%, 4=58.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.239 issued rwts: total=9481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.239 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.239 00:39:20.239 Run status group 0 (all jobs): 00:39:20.239 READ: bw=58.8MiB/s (61.7MB/s), 14.4MiB/s-14.9MiB/s (15.1MB/s-15.6MB/s), io=294MiB (309MB), run=5001-5004msec 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 00:39:20.498 real 0m24.396s 00:39:20.498 user 4m36.161s 00:39:20.498 sys 0m5.724s 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 ************************************ 00:39:20.498 END TEST fio_dif_rand_params 00:39:20.498 ************************************ 00:39:20.498 10:00:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:20.498 10:00:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:20.498 10:00:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 ************************************ 00:39:20.498 START TEST fio_dif_digest 00:39:20.498 ************************************ 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:20.498 
10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 bdev_null0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.498 [2024-10-07 10:00:15.213111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.498 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:20.498 { 00:39:20.498 "params": { 00:39:20.499 "name": "Nvme$subsystem", 00:39:20.499 "trtype": "$TEST_TRANSPORT", 00:39:20.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.499 "adrfam": "ipv4", 00:39:20.499 "trsvcid": "$NVMF_PORT", 00:39:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.499 "hdgst": ${hdgst:-false}, 00:39:20.499 "ddgst": 
${ddgst:-false} 00:39:20.499 }, 00:39:20.499 "method": "bdev_nvme_attach_controller" 00:39:20.499 } 00:39:20.499 EOF 00:39:20.499 )") 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:20.499 "params": { 00:39:20.499 "name": "Nvme0", 00:39:20.499 "trtype": "tcp", 00:39:20.499 "traddr": "10.0.0.2", 00:39:20.499 "adrfam": "ipv4", 00:39:20.499 "trsvcid": "4420", 00:39:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:20.499 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:20.499 "hdgst": true, 00:39:20.499 "ddgst": true 00:39:20.499 }, 00:39:20.499 "method": "bdev_nvme_attach_controller" 00:39:20.499 }' 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:20.499 10:00:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.758 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:20.758 ... 
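Relative to the random-params run earlier, the only functional change in this digest test is visible in the parameters printed just above: hdgst and ddgst are now true, which asks the initiator to enable NVMe/TCP header and data digests on the connection. As a sketch, the corresponding bdev configuration file would look roughly like the following; the attach-controller parameters are copied from the trace, while the surrounding "subsystems"/"bdev" skeleton is the usual SPDK JSON-config layout and is assumed here (the harness assembles it outside this excerpt).

# Sketch: digest-enabled bdev config for the fio spdk_bdev ioengine.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF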
00:39:20.758 fio-3.35 00:39:20.758 Starting 3 threads 00:39:33.023 00:39:33.023 filename0: (groupid=0, jobs=1): err= 0: pid=1738973: Mon Oct 7 10:00:26 2024 00:39:33.023 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(222MiB/10045msec) 00:39:33.023 slat (nsec): min=5794, max=90914, avg=21382.71, stdev=3937.57 00:39:33.023 clat (msec): min=9, max=107, avg=16.91, stdev= 6.90 00:39:33.023 lat (msec): min=9, max=107, avg=16.93, stdev= 6.90 00:39:33.023 clat percentiles (msec): 00:39:33.023 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:39:33.023 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:39:33.023 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 18], 95.00th=[ 19], 00:39:33.023 | 99.00th=[ 24], 99.50th=[ 89], 99.90th=[ 106], 99.95th=[ 108], 00:39:33.023 | 99.99th=[ 108] 00:39:33.023 bw ( KiB/s): min= 6912, max=25344, per=32.13%, avg=22720.00, stdev=3772.37, samples=20 00:39:33.023 iops : min= 54, max= 198, avg=177.50, stdev=29.47, samples=20 00:39:33.023 lat (msec) : 10=0.06%, 20=98.82%, 50=0.23%, 100=0.68%, 250=0.23% 00:39:33.023 cpu : usr=94.99%, sys=4.45%, ctx=15, majf=0, minf=150 00:39:33.023 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 issued rwts: total=1777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.023 filename0: (groupid=0, jobs=1): err= 0: pid=1738974: Mon Oct 7 10:00:26 2024 00:39:33.023 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10046msec) 00:39:33.023 slat (nsec): min=6087, max=54840, avg=23869.83, stdev=4755.33 00:39:33.023 clat (usec): min=8047, max=92825, avg=16300.21, stdev=6376.01 00:39:33.023 lat (usec): min=8072, max=92851, avg=16324.08, stdev=6376.05 00:39:33.023 clat percentiles (usec): 00:39:33.023 | 1.00th=[12911], 5.00th=[13960], 10.00th=[14353], 20.00th=[14877], 00:39:33.023 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:39:33.023 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:39:33.023 | 99.00th=[31851], 99.50th=[82314], 99.90th=[92799], 99.95th=[92799], 00:39:33.023 | 99.99th=[92799] 00:39:33.023 bw ( KiB/s): min= 7168, max=26368, per=33.32%, avg=23564.80, stdev=3916.47, samples=20 00:39:33.023 iops : min= 56, max= 206, avg=184.10, stdev=30.60, samples=20 00:39:33.023 lat (msec) : 10=0.60%, 20=98.26%, 50=0.22%, 100=0.92% 00:39:33.023 cpu : usr=94.87%, sys=4.18%, ctx=51, majf=0, minf=102 00:39:33.023 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.023 filename0: (groupid=0, jobs=1): err= 0: pid=1738975: Mon Oct 7 10:00:26 2024 00:39:33.023 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(241MiB/10046msec) 00:39:33.023 slat (nsec): min=7521, max=56969, avg=22015.08, stdev=5010.74 00:39:33.023 clat (usec): min=11086, max=80761, avg=15567.68, stdev=5789.71 00:39:33.023 lat (usec): min=11111, max=80818, avg=15589.70, stdev=5790.94 00:39:33.023 clat percentiles (usec): 00:39:33.023 | 1.00th=[12387], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:39:33.023 | 30.00th=[14353], 
40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:39:33.023 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:39:33.023 | 99.00th=[53740], 99.50th=[69731], 99.90th=[80217], 99.95th=[81265], 00:39:33.023 | 99.99th=[81265] 00:39:33.023 bw ( KiB/s): min= 8192, max=27392, per=34.90%, avg=24678.40, stdev=4006.95, samples=20 00:39:33.023 iops : min= 64, max= 214, avg=192.80, stdev=31.30, samples=20 00:39:33.023 lat (msec) : 20=98.65%, 50=0.26%, 100=1.09% 00:39:33.023 cpu : usr=94.93%, sys=4.14%, ctx=72, majf=0, minf=186 00:39:33.023 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.023 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.023 00:39:33.023 Run status group 0 (all jobs): 00:39:33.023 READ: bw=69.1MiB/s (72.4MB/s), 22.1MiB/s-24.0MiB/s (23.2MB/s-25.2MB/s), io=694MiB (727MB), run=10045-10046msec 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.023 00:39:33.023 real 0m11.474s 00:39:33.023 user 0m29.979s 00:39:33.023 sys 0m1.597s 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:33.023 10:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.023 ************************************ 00:39:33.023 END TEST fio_dif_digest 00:39:33.023 ************************************ 00:39:33.023 10:00:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:33.023 10:00:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.023 rmmod nvme_tcp 00:39:33.023 rmmod nvme_fabrics 00:39:33.023 rmmod nvme_keyring 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1732187 ']' 00:39:33.023 10:00:26 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1732187 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1732187 ']' 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1732187 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1732187 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1732187' 00:39:33.023 killing process with pid 1732187 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1732187 00:39:33.023 10:00:26 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1732187 00:39:33.023 10:00:27 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:39:33.023 10:00:27 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:33.963 Waiting for block devices as requested 00:39:33.963 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:39:33.963 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:34.222 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:34.222 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:34.222 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:34.222 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:34.481 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:34.481 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:34.481 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:34.481 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:34.741 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:34.741 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:34.741 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.000 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.000 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.000 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.000 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.261 10:00:29 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.261 10:00:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:35.261 10:00:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.165 10:00:31 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.165 00:39:37.165 real 1m9.319s 00:39:37.165 
user 6m35.979s 00:39:37.165 sys 0m17.705s 00:39:37.165 10:00:31 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:37.165 10:00:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.165 ************************************ 00:39:37.165 END TEST nvmf_dif 00:39:37.165 ************************************ 00:39:37.165 10:00:31 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.165 10:00:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:37.165 10:00:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:37.165 10:00:31 -- common/autotest_common.sh@10 -- # set +x 00:39:37.165 ************************************ 00:39:37.165 START TEST nvmf_abort_qd_sizes 00:39:37.165 ************************************ 00:39:37.165 10:00:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.424 * Looking for test storage... 00:39:37.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:37.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.425 --rc genhtml_branch_coverage=1 00:39:37.425 --rc genhtml_function_coverage=1 00:39:37.425 --rc genhtml_legend=1 00:39:37.425 --rc geninfo_all_blocks=1 00:39:37.425 --rc geninfo_unexecuted_blocks=1 00:39:37.425 00:39:37.425 ' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:37.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.425 --rc genhtml_branch_coverage=1 00:39:37.425 --rc genhtml_function_coverage=1 00:39:37.425 --rc genhtml_legend=1 00:39:37.425 --rc geninfo_all_blocks=1 00:39:37.425 --rc geninfo_unexecuted_blocks=1 00:39:37.425 00:39:37.425 ' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:37.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.425 --rc genhtml_branch_coverage=1 00:39:37.425 --rc genhtml_function_coverage=1 00:39:37.425 --rc genhtml_legend=1 00:39:37.425 --rc geninfo_all_blocks=1 00:39:37.425 --rc geninfo_unexecuted_blocks=1 00:39:37.425 00:39:37.425 ' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:37.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.425 --rc genhtml_branch_coverage=1 00:39:37.425 --rc genhtml_function_coverage=1 00:39:37.425 --rc genhtml_legend=1 00:39:37.425 --rc geninfo_all_blocks=1 00:39:37.425 --rc geninfo_unexecuted_blocks=1 00:39:37.425 00:39:37.425 ' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:37.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.425 10:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.954 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:39.955 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:39.955 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:39.955 Found net devices under 0000:84:00.0: cvl_0_0 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:39.955 Found net devices under 0000:84:00.1: cvl_0_1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.955 10:00:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.955 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:39:40.213 00:39:40.213 --- 10.0.0.2 ping statistics --- 00:39:40.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.213 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:39:40.213 00:39:40.213 --- 10.0.0.1 ping statistics --- 00:39:40.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.213 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:39:40.213 10:00:34 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:41.585 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.585 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:41.585 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.585 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.585 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.844 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.844 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.844 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.844 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.844 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:42.779 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1743932 00:39:42.779 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1743932 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1743932 ']' 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:42.780 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.780 [2024-10-07 10:00:37.581973] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:39:42.780 [2024-10-07 10:00:37.582058] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:43.038 [2024-10-07 10:00:37.660181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:43.038 [2024-10-07 10:00:37.788608] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:43.038 [2024-10-07 10:00:37.788683] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:43.038 [2024-10-07 10:00:37.788699] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:43.038 [2024-10-07 10:00:37.788713] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:43.039 [2024-10-07 10:00:37.788725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:43.039 [2024-10-07 10:00:37.790686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.039 [2024-10-07 10:00:37.790786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.039 [2024-10-07 10:00:37.790843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:43.039 [2024-10-07 10:00:37.790846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:43.297 
10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:43.297 10:00:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.297 ************************************ 00:39:43.297 START TEST spdk_target_abort 00:39:43.297 ************************************ 00:39:43.297 10:00:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:39:43.297 10:00:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:43.297 10:00:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:39:43.297 10:00:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.297 10:00:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.576 spdk_targetn1 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.576 [2024-10-07 10:00:40.842100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.576 [2024-10-07 10:00:40.874381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.576 10:00:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.857 Initializing NVMe Controllers 00:39:49.857 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.857 Initialization complete. Launching workers. 00:39:49.857 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11809, failed: 0 00:39:49.857 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1452, failed to submit 10357 00:39:49.857 success 703, unsuccessful 749, failed 0 00:39:49.857 10:00:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:49.857 10:00:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:53.138 Initializing NVMe Controllers 00:39:53.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:53.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:53.138 Initialization complete. Launching workers. 00:39:53.138 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8482, failed: 0 00:39:53.138 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 7211 00:39:53.138 success 330, unsuccessful 941, failed 0 00:39:53.138 10:00:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:53.138 10:00:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:56.420 Initializing NVMe Controllers 00:39:56.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:56.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:56.420 Initialization complete. Launching workers. 
00:39:56.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31381, failed: 0 00:39:56.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2816, failed to submit 28565 00:39:56.420 success 519, unsuccessful 2297, failed 0 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.420 10:00:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1743932 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1743932 ']' 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1743932 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1743932 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1743932' 00:39:57.353 killing process with pid 1743932 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1743932 00:39:57.353 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1743932 00:39:57.920 00:39:57.920 real 0m14.447s 00:39:57.920 user 0m54.479s 00:39:57.920 sys 0m3.090s 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:57.920 ************************************ 00:39:57.920 END TEST spdk_target_abort 00:39:57.920 ************************************ 00:39:57.920 10:00:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:57.920 10:00:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:57.920 10:00:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:57.920 10:00:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:57.920 ************************************ 00:39:57.920 START TEST kernel_target_abort 00:39:57.920 
************************************ 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:57.920 10:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:59.323 Waiting for block devices as requested 00:39:59.323 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:39:59.323 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:59.582 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:59.582 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:59.582 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:59.582 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:59.841 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:59.841 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:59.841 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:59.841 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:00.101 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:00.101 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:00.360 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:00.360 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:00.360 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:00.619 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:00.619 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:00.619 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:00.878 No valid GPT data, bailing 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:00.878 10:00:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:40:00.878 00:40:00.878 Discovery Log Number of Records 2, Generation counter 2 00:40:00.878 =====Discovery Log Entry 0====== 00:40:00.878 trtype: tcp 00:40:00.878 adrfam: ipv4 00:40:00.878 subtype: current discovery subsystem 00:40:00.878 treq: not specified, sq flow control disable supported 00:40:00.878 portid: 1 00:40:00.878 trsvcid: 4420 00:40:00.878 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:00.878 traddr: 10.0.0.1 00:40:00.878 eflags: none 00:40:00.878 sectype: none 00:40:00.878 =====Discovery Log Entry 1====== 00:40:00.878 trtype: tcp 00:40:00.878 adrfam: ipv4 00:40:00.878 subtype: nvme subsystem 00:40:00.878 treq: not specified, sq flow control disable supported 00:40:00.878 portid: 1 00:40:00.878 trsvcid: 4420 00:40:00.878 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:00.878 traddr: 10.0.0.1 00:40:00.878 eflags: none 00:40:00.878 sectype: none 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.878 10:00:55 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:00.878 10:00:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:04.164 Initializing NVMe Controllers 00:40:04.164 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:04.164 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:04.164 Initialization complete. Launching workers. 00:40:04.164 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46174, failed: 0 00:40:04.164 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46174, failed to submit 0 00:40:04.164 success 0, unsuccessful 46174, failed 0 00:40:04.164 10:00:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:04.164 10:00:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:07.516 Initializing NVMe Controllers 00:40:07.516 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:07.516 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:07.516 Initialization complete. Launching workers. 
00:40:07.516 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81197, failed: 0 00:40:07.516 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20466, failed to submit 60731 00:40:07.516 success 0, unsuccessful 20466, failed 0 00:40:07.516 10:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:07.516 10:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:10.799 Initializing NVMe Controllers 00:40:10.799 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:10.799 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:10.799 Initialization complete. Launching workers. 00:40:10.799 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77729, failed: 0 00:40:10.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19418, failed to submit 58311 00:40:10.799 success 0, unsuccessful 19418, failed 0 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:40:10.799 10:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:12.176 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:12.176 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:12.176 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:40:12.176 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:13.112 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:40:13.112 00:40:13.112 real 0m15.321s 00:40:13.112 user 0m6.935s 00:40:13.112 sys 0m3.819s 00:40:13.112 10:01:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:13.112 10:01:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:13.112 ************************************ 00:40:13.112 END TEST kernel_target_abort 00:40:13.112 ************************************ 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.112 rmmod nvme_tcp 00:40:13.112 rmmod nvme_fabrics 00:40:13.112 rmmod nvme_keyring 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1743932 ']' 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1743932 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1743932 ']' 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1743932 00:40:13.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1743932) - No such process 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1743932 is not found' 00:40:13.112 Process with pid 1743932 is not found 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:40:13.112 10:01:07 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:15.015 Waiting for block devices as requested 00:40:15.015 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:40:15.015 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:15.015 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:15.015 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:15.274 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:15.274 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:15.274 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:15.274 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:15.274 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:15.532 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:15.532 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:15.532 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:15.532 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:15.792 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:15.792 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:15.792 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:16.051 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:16.051 10:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.052 10:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:16.052 10:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.588 10:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:18.588 00:40:18.588 real 0m40.858s 00:40:18.588 user 1m4.201s 00:40:18.588 sys 0m11.385s 00:40:18.588 10:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:18.588 10:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:18.588 ************************************ 00:40:18.588 END TEST nvmf_abort_qd_sizes 00:40:18.588 ************************************ 00:40:18.588 10:01:12 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.588 10:01:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:18.588 10:01:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:18.588 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:40:18.588 ************************************ 00:40:18.588 START TEST keyring_file 00:40:18.588 ************************************ 00:40:18.588 10:01:12 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.588 * Looking for test storage... 
00:40:18.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:18.588 10:01:12 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:18.588 10:01:12 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:40:18.588 10:01:12 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:18.588 10:01:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.588 --rc genhtml_branch_coverage=1 00:40:18.588 --rc genhtml_function_coverage=1 00:40:18.588 --rc genhtml_legend=1 00:40:18.588 --rc geninfo_all_blocks=1 00:40:18.588 --rc geninfo_unexecuted_blocks=1 00:40:18.588 00:40:18.588 ' 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.588 --rc genhtml_branch_coverage=1 00:40:18.588 --rc genhtml_function_coverage=1 00:40:18.588 --rc genhtml_legend=1 00:40:18.588 --rc geninfo_all_blocks=1 
00:40:18.588 --rc geninfo_unexecuted_blocks=1 00:40:18.588 00:40:18.588 ' 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.588 --rc genhtml_branch_coverage=1 00:40:18.588 --rc genhtml_function_coverage=1 00:40:18.588 --rc genhtml_legend=1 00:40:18.588 --rc geninfo_all_blocks=1 00:40:18.588 --rc geninfo_unexecuted_blocks=1 00:40:18.588 00:40:18.588 ' 00:40:18.588 10:01:13 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.588 --rc genhtml_branch_coverage=1 00:40:18.588 --rc genhtml_function_coverage=1 00:40:18.588 --rc genhtml_legend=1 00:40:18.588 --rc geninfo_all_blocks=1 00:40:18.588 --rc geninfo_unexecuted_blocks=1 00:40:18.588 00:40:18.588 ' 00:40:18.588 10:01:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:18.588 10:01:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.589 10:01:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:18.589 10:01:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.589 10:01:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.589 10:01:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.589 10:01:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.589 10:01:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.589 10:01:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.589 10:01:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:18.589 10:01:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:18.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
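The nvmf/common.sh trace above derives NVME_HOSTNQN from `nvme gen-hostnqn`, which returns a UUID-based NQN such as the nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... value logged. A minimal sketch of building an equivalent string, assuming only the standard UUID-based host NQN prefix; whether the tool reuses a persistent host UUID or generates a fresh one is not shown in the log.

# Sketch: build a UUID-based host NQN like the one `nvme gen-hostnqn` printed above.
# Assumption: the UUID-based NQN form is "nqn.2014-08.org.nvmexpress:uuid:<uuid>".
import uuid

def gen_hostnqn() -> str:
    # A random (version 4) UUID; the real tool may reuse the host's stored UUID instead.
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

if __name__ == "__main__":
    print(gen_hostnqn())
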
00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QWMgRnZgZK 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QWMgRnZgZK 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QWMgRnZgZK 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QWMgRnZgZK 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AldIWNQTfi 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:18.589 10:01:13 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AldIWNQTfi 00:40:18.589 10:01:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AldIWNQTfi 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AldIWNQTfi 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=1749714 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:18.589 10:01:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1749714 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1749714 ']' 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:18.589 10:01:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:18.589 [2024-10-07 10:01:13.376966] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:40:18.589 [2024-10-07 10:01:13.377064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749714 ] 00:40:18.848 [2024-10-07 10:01:13.470417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.848 [2024-10-07 10:01:13.597785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.107 10:01:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.107 10:01:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:40:19.107 10:01:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:19.107 10:01:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.107 10:01:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.107 [2024-10-07 10:01:13.899266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.107 null0 00:40:19.365 [2024-10-07 10:01:13.931327] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:19.365 [2024-10-07 10:01:13.931861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.365 10:01:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.365 [2024-10-07 10:01:13.959366] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:19.365 request: 00:40:19.365 { 00:40:19.365 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.365 "secure_channel": false, 00:40:19.365 "listen_address": { 00:40:19.365 "trtype": "tcp", 00:40:19.365 "traddr": "127.0.0.1", 00:40:19.365 "trsvcid": "4420" 00:40:19.365 }, 00:40:19.365 "method": "nvmf_subsystem_add_listener", 00:40:19.365 "req_id": 1 00:40:19.365 } 00:40:19.365 Got JSON-RPC error response 00:40:19.365 response: 00:40:19.365 { 00:40:19.365 
"code": -32602, 00:40:19.365 "message": "Invalid parameters" 00:40:19.365 } 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:19.365 10:01:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=1749839 00:40:19.365 10:01:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:19.365 10:01:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1749839 /var/tmp/bperf.sock 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1749839 ']' 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:19.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:19.365 10:01:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.366 [2024-10-07 10:01:14.014146] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:40:19.366 [2024-10-07 10:01:14.014225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749839 ] 00:40:19.366 [2024-10-07 10:01:14.080778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.624 [2024-10-07 10:01:14.203265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.624 10:01:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.624 10:01:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:40:19.624 10:01:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:19.624 10:01:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:20.190 10:01:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AldIWNQTfi 00:40:20.190 10:01:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AldIWNQTfi 00:40:20.756 10:01:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:20.756 10:01:15 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:20.756 10:01:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:20.756 10:01:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:20.756 10:01:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:40:21.322 10:01:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QWMgRnZgZK == \/\t\m\p\/\t\m\p\.\Q\W\M\g\R\n\Z\g\Z\K ]] 00:40:21.322 10:01:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:21.322 10:01:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:21.322 10:01:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.322 10:01:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.322 10:01:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:21.580 10:01:16 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AldIWNQTfi == \/\t\m\p\/\t\m\p\.\A\l\d\I\W\N\Q\T\f\i ]] 00:40:21.580 10:01:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:21.580 10:01:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.580 10:01:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:21.580 10:01:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.580 10:01:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.580 10:01:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.839 10:01:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:21.839 10:01:16 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:21.839 10:01:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:21.839 10:01:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.839 10:01:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.839 10:01:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:21.839 10:01:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.406 10:01:16 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:22.406 10:01:16 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.406 10:01:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.664 [2024-10-07 10:01:17.467037] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:22.922 nvme0n1 00:40:22.922 10:01:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:22.923 10:01:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.923 10:01:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.923 10:01:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.923 10:01:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.923 10:01:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:23.181 10:01:17 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:23.181 10:01:17 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:23.181 10:01:17 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:40:23.181 10:01:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:23.181 10:01:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:23.181 10:01:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.181 10:01:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:23.439 10:01:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:23.439 10:01:18 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:23.698 Running I/O for 1 seconds... 00:40:24.632 8733.00 IOPS, 34.11 MiB/s 00:40:24.632 Latency(us) 00:40:24.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.633 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:24.633 nvme0n1 : 1.01 8783.27 34.31 0.00 0.00 14520.82 6747.78 24758.04 00:40:24.633 =================================================================================================================== 00:40:24.633 Total : 8783.27 34.31 0.00 0.00 14520.82 6747.78 24758.04 00:40:24.633 { 00:40:24.633 "results": [ 00:40:24.633 { 00:40:24.633 "job": "nvme0n1", 00:40:24.633 "core_mask": "0x2", 00:40:24.633 "workload": "randrw", 00:40:24.633 "percentage": 50, 00:40:24.633 "status": "finished", 00:40:24.633 "queue_depth": 128, 00:40:24.633 "io_size": 4096, 00:40:24.633 "runtime": 1.009078, 00:40:24.633 "iops": 8783.265515648938, 00:40:24.633 "mibps": 34.309630920503665, 00:40:24.633 "io_failed": 0, 00:40:24.633 "io_timeout": 0, 00:40:24.633 "avg_latency_us": 14520.821770740617, 00:40:24.633 "min_latency_us": 6747.780740740741, 00:40:24.633 "max_latency_us": 24758.044444444444 00:40:24.633 } 00:40:24.633 ], 00:40:24.633 "core_count": 1 00:40:24.633 } 00:40:24.633 10:01:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:24.633 10:01:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:25.199 10:01:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:25.199 10:01:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:25.199 10:01:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.199 10:01:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.199 10:01:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.199 10:01:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.458 10:01:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:25.458 10:01:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:25.458 10:01:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.458 10:01:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.458 10:01:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.458 10:01:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.458 10:01:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.392 
10:01:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:26.392 10:01:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:26.392 10:01:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.392 10:01:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.651 [2024-10-07 10:01:21.236644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:26.651 [2024-10-07 10:01:21.237532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba7b90 (107): Transport endpoint is not connected 00:40:26.651 [2024-10-07 10:01:21.238524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba7b90 (9): Bad file descriptor 00:40:26.651 [2024-10-07 10:01:21.239522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:26.651 [2024-10-07 10:01:21.239545] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:26.651 [2024-10-07 10:01:21.239560] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:26.651 [2024-10-07 10:01:21.239577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
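Every rpc_cmd/bperf_cmd in this test is scripts/rpc.py posting a JSON-RPC request over a Unix domain socket (/var/tmp/spdk.sock for the target, /var/tmp/bperf.sock for bdevperf), and the request:/response: blocks dumped around the failing calls are those payloads. A minimal sketch of such a client, assuming single-JSON-object-per-request framing on an AF_UNIX stream socket; the socket path and method name are taken from the log, the framing detail is an assumption.

# Sketch: issue one JSON-RPC 2.0 request over a Unix socket, roughly what
# scripts/rpc.py does for calls such as keyring_get_keys in this trace.
import json
import socket

def rpc(sock_path, method, params=None, req_id=1):
    # Build a request shaped like the request: blocks dumped in the log.
    request = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)   # done once one full JSON object has arrived
            except json.JSONDecodeError:
                continue                 # response not complete yet, keep reading
    raise RuntimeError("socket closed before a complete response arrived")

if __name__ == "__main__":
    print(rpc("/var/tmp/bperf.sock", "keyring_get_keys"))
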
00:40:26.651 request: 00:40:26.651 { 00:40:26.651 "name": "nvme0", 00:40:26.651 "trtype": "tcp", 00:40:26.651 "traddr": "127.0.0.1", 00:40:26.651 "adrfam": "ipv4", 00:40:26.651 "trsvcid": "4420", 00:40:26.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:26.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:26.651 "prchk_reftag": false, 00:40:26.651 "prchk_guard": false, 00:40:26.651 "hdgst": false, 00:40:26.651 "ddgst": false, 00:40:26.651 "psk": "key1", 00:40:26.651 "allow_unrecognized_csi": false, 00:40:26.651 "method": "bdev_nvme_attach_controller", 00:40:26.651 "req_id": 1 00:40:26.651 } 00:40:26.651 Got JSON-RPC error response 00:40:26.651 response: 00:40:26.651 { 00:40:26.651 "code": -5, 00:40:26.651 "message": "Input/output error" 00:40:26.651 } 00:40:26.651 10:01:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:26.651 10:01:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:26.651 10:01:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:26.651 10:01:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:26.651 10:01:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:26.651 10:01:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:26.651 10:01:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:26.651 10:01:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:26.651 10:01:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.651 10:01:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:27.219 10:01:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:27.219 10:01:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:27.219 10:01:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:27.219 10:01:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:27.219 10:01:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:27.219 10:01:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:27.219 10:01:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:27.477 10:01:22 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:27.477 10:01:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:27.477 10:01:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:28.043 10:01:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:28.043 10:01:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:28.301 10:01:23 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:28.301 10:01:23 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:28.301 10:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.867 10:01:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:28.867 10:01:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.QWMgRnZgZK 00:40:28.867 10:01:23 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:28.867 10:01:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:28.867 10:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:29.433 [2024-10-07 10:01:24.137696] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QWMgRnZgZK': 0100660 00:40:29.433 [2024-10-07 10:01:24.137735] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:29.433 request: 00:40:29.433 { 00:40:29.433 "name": "key0", 00:40:29.433 "path": "/tmp/tmp.QWMgRnZgZK", 00:40:29.433 "method": "keyring_file_add_key", 00:40:29.433 "req_id": 1 00:40:29.433 } 00:40:29.433 Got JSON-RPC error response 00:40:29.433 response: 00:40:29.433 { 00:40:29.433 "code": -1, 00:40:29.433 "message": "Operation not permitted" 00:40:29.433 } 00:40:29.433 10:01:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:29.433 10:01:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:29.433 10:01:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:29.433 10:01:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:29.433 10:01:24 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.QWMgRnZgZK 00:40:29.433 10:01:24 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:29.433 10:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QWMgRnZgZK 00:40:29.997 10:01:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.QWMgRnZgZK 00:40:29.997 10:01:24 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:29.997 10:01:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:29.997 10:01:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.997 10:01:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.997 10:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.997 10:01:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:30.255 10:01:24 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:30.255 10:01:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.255 10:01:24 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.255 10:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.513 [2024-10-07 10:01:25.188563] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QWMgRnZgZK': No such file or directory 00:40:30.513 [2024-10-07 10:01:25.188622] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:30.513 [2024-10-07 10:01:25.188650] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:30.513 [2024-10-07 10:01:25.188665] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:30.513 [2024-10-07 10:01:25.188689] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:30.513 [2024-10-07 10:01:25.188703] bdev_nvme.c:6449:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:30.513 request: 00:40:30.513 { 00:40:30.513 "name": "nvme0", 00:40:30.513 "trtype": "tcp", 00:40:30.513 "traddr": "127.0.0.1", 00:40:30.513 "adrfam": "ipv4", 00:40:30.513 "trsvcid": "4420", 00:40:30.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:30.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:30.513 "prchk_reftag": false, 00:40:30.513 "prchk_guard": false, 00:40:30.513 "hdgst": false, 00:40:30.513 "ddgst": false, 00:40:30.513 "psk": "key0", 00:40:30.513 "allow_unrecognized_csi": false, 00:40:30.513 "method": "bdev_nvme_attach_controller", 00:40:30.513 "req_id": 1 00:40:30.513 } 00:40:30.513 Got JSON-RPC error response 00:40:30.513 response: 00:40:30.513 { 00:40:30.513 "code": -19, 00:40:30.513 "message": "No such device" 00:40:30.513 } 00:40:30.513 10:01:25 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:30.513 10:01:25 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:30.513 10:01:25 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:30.513 10:01:25 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:30.513 10:01:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:30.513 10:01:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:30.772 10:01:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:30.772 10:01:25 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:40:30.772 10:01:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:30.772 10:01:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:30.772 10:01:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:30.772 10:01:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:31.030 10:01:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2c6TqDVRiV 00:40:31.030 10:01:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:31.030 10:01:25 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:31.030 10:01:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2c6TqDVRiV 00:40:31.030 10:01:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2c6TqDVRiV 00:40:31.030 10:01:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2c6TqDVRiV 00:40:31.030 10:01:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2c6TqDVRiV 00:40:31.030 10:01:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2c6TqDVRiV 00:40:31.596 10:01:26 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:31.596 10:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:32.162 nvme0n1 00:40:32.162 10:01:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:32.162 10:01:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:32.162 10:01:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:32.162 10:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.162 10:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.162 10:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:32.420 10:01:27 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:32.420 10:01:27 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:32.420 10:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:32.677 10:01:27 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:32.677 10:01:27 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:32.677 10:01:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.677 10:01:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:32.677 10:01:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.937 10:01:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:32.937 10:01:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:32.937 10:01:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:32.937 10:01:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:32.937 10:01:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.937 10:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.937 10:01:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.195 10:01:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:33.195 10:01:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:33.195 10:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:33.770 10:01:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:33.770 10:01:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:33.770 10:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.034 10:01:28 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:34.034 10:01:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2c6TqDVRiV 00:40:34.034 10:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2c6TqDVRiV 00:40:34.292 10:01:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AldIWNQTfi 00:40:34.292 10:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AldIWNQTfi 00:40:34.550 10:01:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:34.550 10:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:34.808 nvme0n1 00:40:34.808 10:01:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:34.808 10:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:35.066 10:01:29 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:35.066 "subsystems": [ 00:40:35.066 { 00:40:35.066 "subsystem": "keyring", 00:40:35.066 "config": [ 00:40:35.066 { 00:40:35.066 "method": "keyring_file_add_key", 00:40:35.066 "params": { 00:40:35.066 "name": "key0", 00:40:35.066 "path": "/tmp/tmp.2c6TqDVRiV" 00:40:35.066 } 00:40:35.066 }, 00:40:35.066 { 00:40:35.066 "method": "keyring_file_add_key", 00:40:35.066 "params": { 00:40:35.066 "name": "key1", 00:40:35.066 "path": "/tmp/tmp.AldIWNQTfi" 00:40:35.066 } 00:40:35.066 } 00:40:35.066 ] 00:40:35.066 
}, 00:40:35.066 { 00:40:35.066 "subsystem": "iobuf", 00:40:35.066 "config": [ 00:40:35.066 { 00:40:35.066 "method": "iobuf_set_options", 00:40:35.066 "params": { 00:40:35.066 "small_pool_count": 8192, 00:40:35.066 "large_pool_count": 1024, 00:40:35.066 "small_bufsize": 8192, 00:40:35.066 "large_bufsize": 135168 00:40:35.066 } 00:40:35.066 } 00:40:35.066 ] 00:40:35.066 }, 00:40:35.066 { 00:40:35.066 "subsystem": "sock", 00:40:35.066 "config": [ 00:40:35.066 { 00:40:35.066 "method": "sock_set_default_impl", 00:40:35.066 "params": { 00:40:35.066 "impl_name": "posix" 00:40:35.066 } 00:40:35.066 }, 00:40:35.066 { 00:40:35.066 "method": "sock_impl_set_options", 00:40:35.066 "params": { 00:40:35.066 "impl_name": "ssl", 00:40:35.066 "recv_buf_size": 4096, 00:40:35.066 "send_buf_size": 4096, 00:40:35.066 "enable_recv_pipe": true, 00:40:35.066 "enable_quickack": false, 00:40:35.066 "enable_placement_id": 0, 00:40:35.066 "enable_zerocopy_send_server": true, 00:40:35.066 "enable_zerocopy_send_client": false, 00:40:35.066 "zerocopy_threshold": 0, 00:40:35.066 "tls_version": 0, 00:40:35.066 "enable_ktls": false 00:40:35.066 } 00:40:35.066 }, 00:40:35.066 { 00:40:35.066 "method": "sock_impl_set_options", 00:40:35.066 "params": { 00:40:35.067 "impl_name": "posix", 00:40:35.067 "recv_buf_size": 2097152, 00:40:35.067 "send_buf_size": 2097152, 00:40:35.067 "enable_recv_pipe": true, 00:40:35.067 "enable_quickack": false, 00:40:35.067 "enable_placement_id": 0, 00:40:35.067 "enable_zerocopy_send_server": true, 00:40:35.067 "enable_zerocopy_send_client": false, 00:40:35.067 "zerocopy_threshold": 0, 00:40:35.067 "tls_version": 0, 00:40:35.067 "enable_ktls": false 00:40:35.067 } 00:40:35.067 } 00:40:35.067 ] 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "subsystem": "vmd", 00:40:35.067 "config": [] 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "subsystem": "accel", 00:40:35.067 "config": [ 00:40:35.067 { 00:40:35.067 "method": "accel_set_options", 00:40:35.067 "params": { 00:40:35.067 "small_cache_size": 128, 00:40:35.067 "large_cache_size": 16, 00:40:35.067 "task_count": 2048, 00:40:35.067 "sequence_count": 2048, 00:40:35.067 "buf_count": 2048 00:40:35.067 } 00:40:35.067 } 00:40:35.067 ] 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "subsystem": "bdev", 00:40:35.067 "config": [ 00:40:35.067 { 00:40:35.067 "method": "bdev_set_options", 00:40:35.067 "params": { 00:40:35.067 "bdev_io_pool_size": 65535, 00:40:35.067 "bdev_io_cache_size": 256, 00:40:35.067 "bdev_auto_examine": true, 00:40:35.067 "iobuf_small_cache_size": 128, 00:40:35.067 "iobuf_large_cache_size": 16 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_raid_set_options", 00:40:35.067 "params": { 00:40:35.067 "process_window_size_kb": 1024, 00:40:35.067 "process_max_bandwidth_mb_sec": 0 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_iscsi_set_options", 00:40:35.067 "params": { 00:40:35.067 "timeout_sec": 30 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_nvme_set_options", 00:40:35.067 "params": { 00:40:35.067 "action_on_timeout": "none", 00:40:35.067 "timeout_us": 0, 00:40:35.067 "timeout_admin_us": 0, 00:40:35.067 "keep_alive_timeout_ms": 10000, 00:40:35.067 "arbitration_burst": 0, 00:40:35.067 "low_priority_weight": 0, 00:40:35.067 "medium_priority_weight": 0, 00:40:35.067 "high_priority_weight": 0, 00:40:35.067 "nvme_adminq_poll_period_us": 10000, 00:40:35.067 "nvme_ioq_poll_period_us": 0, 00:40:35.067 "io_queue_requests": 512, 00:40:35.067 "delay_cmd_submit": true, 00:40:35.067 
"transport_retry_count": 4, 00:40:35.067 "bdev_retry_count": 3, 00:40:35.067 "transport_ack_timeout": 0, 00:40:35.067 "ctrlr_loss_timeout_sec": 0, 00:40:35.067 "reconnect_delay_sec": 0, 00:40:35.067 "fast_io_fail_timeout_sec": 0, 00:40:35.067 "disable_auto_failback": false, 00:40:35.067 "generate_uuids": false, 00:40:35.067 "transport_tos": 0, 00:40:35.067 "nvme_error_stat": false, 00:40:35.067 "rdma_srq_size": 0, 00:40:35.067 "io_path_stat": false, 00:40:35.067 "allow_accel_sequence": false, 00:40:35.067 "rdma_max_cq_size": 0, 00:40:35.067 "rdma_cm_event_timeout_ms": 0, 00:40:35.067 "dhchap_digests": [ 00:40:35.067 "sha256", 00:40:35.067 "sha384", 00:40:35.067 "sha512" 00:40:35.067 ], 00:40:35.067 "dhchap_dhgroups": [ 00:40:35.067 "null", 00:40:35.067 "ffdhe2048", 00:40:35.067 "ffdhe3072", 00:40:35.067 "ffdhe4096", 00:40:35.067 "ffdhe6144", 00:40:35.067 "ffdhe8192" 00:40:35.067 ] 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_nvme_attach_controller", 00:40:35.067 "params": { 00:40:35.067 "name": "nvme0", 00:40:35.067 "trtype": "TCP", 00:40:35.067 "adrfam": "IPv4", 00:40:35.067 "traddr": "127.0.0.1", 00:40:35.067 "trsvcid": "4420", 00:40:35.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:35.067 "prchk_reftag": false, 00:40:35.067 "prchk_guard": false, 00:40:35.067 "ctrlr_loss_timeout_sec": 0, 00:40:35.067 "reconnect_delay_sec": 0, 00:40:35.067 "fast_io_fail_timeout_sec": 0, 00:40:35.067 "psk": "key0", 00:40:35.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:35.067 "hdgst": false, 00:40:35.067 "ddgst": false 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_nvme_set_hotplug", 00:40:35.067 "params": { 00:40:35.067 "period_us": 100000, 00:40:35.067 "enable": false 00:40:35.067 } 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "method": "bdev_wait_for_examine" 00:40:35.067 } 00:40:35.067 ] 00:40:35.067 }, 00:40:35.067 { 00:40:35.067 "subsystem": "nbd", 00:40:35.067 "config": [] 00:40:35.067 } 00:40:35.067 ] 00:40:35.067 }' 00:40:35.067 10:01:29 keyring_file -- keyring/file.sh@115 -- # killprocess 1749839 00:40:35.067 10:01:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1749839 ']' 00:40:35.067 10:01:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1749839 00:40:35.067 10:01:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:35.067 10:01:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:35.067 10:01:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1749839 00:40:35.326 10:01:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:35.326 10:01:29 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:35.326 10:01:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1749839' 00:40:35.326 killing process with pid 1749839 00:40:35.326 10:01:29 keyring_file -- common/autotest_common.sh@969 -- # kill 1749839 00:40:35.326 Received shutdown signal, test time was about 1.000000 seconds 00:40:35.326 00:40:35.326 Latency(us) 00:40:35.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.326 =================================================================================================================== 00:40:35.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:35.326 10:01:29 keyring_file -- common/autotest_common.sh@974 -- # wait 1749839 00:40:35.585 10:01:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=1751766 00:40:35.585 
10:01:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1751766 /var/tmp/bperf.sock 00:40:35.585 10:01:30 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1751766 ']' 00:40:35.585 10:01:30 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:35.585 10:01:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:35.585 10:01:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:35.585 "subsystems": [ 00:40:35.585 { 00:40:35.585 "subsystem": "keyring", 00:40:35.585 "config": [ 00:40:35.585 { 00:40:35.585 "method": "keyring_file_add_key", 00:40:35.585 "params": { 00:40:35.585 "name": "key0", 00:40:35.585 "path": "/tmp/tmp.2c6TqDVRiV" 00:40:35.585 } 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "method": "keyring_file_add_key", 00:40:35.585 "params": { 00:40:35.585 "name": "key1", 00:40:35.585 "path": "/tmp/tmp.AldIWNQTfi" 00:40:35.585 } 00:40:35.585 } 00:40:35.585 ] 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "subsystem": "iobuf", 00:40:35.585 "config": [ 00:40:35.585 { 00:40:35.585 "method": "iobuf_set_options", 00:40:35.585 "params": { 00:40:35.585 "small_pool_count": 8192, 00:40:35.585 "large_pool_count": 1024, 00:40:35.585 "small_bufsize": 8192, 00:40:35.585 "large_bufsize": 135168 00:40:35.585 } 00:40:35.585 } 00:40:35.585 ] 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "subsystem": "sock", 00:40:35.585 "config": [ 00:40:35.585 { 00:40:35.585 "method": "sock_set_default_impl", 00:40:35.585 "params": { 00:40:35.585 "impl_name": "posix" 00:40:35.585 } 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "method": "sock_impl_set_options", 00:40:35.585 "params": { 00:40:35.585 "impl_name": "ssl", 00:40:35.585 "recv_buf_size": 4096, 00:40:35.585 "send_buf_size": 4096, 00:40:35.585 "enable_recv_pipe": true, 00:40:35.585 "enable_quickack": false, 00:40:35.585 "enable_placement_id": 0, 00:40:35.585 "enable_zerocopy_send_server": true, 00:40:35.585 "enable_zerocopy_send_client": false, 00:40:35.585 "zerocopy_threshold": 0, 00:40:35.585 "tls_version": 0, 00:40:35.585 "enable_ktls": false 00:40:35.585 } 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "method": "sock_impl_set_options", 00:40:35.585 "params": { 00:40:35.585 "impl_name": "posix", 00:40:35.585 "recv_buf_size": 2097152, 00:40:35.585 "send_buf_size": 2097152, 00:40:35.585 "enable_recv_pipe": true, 00:40:35.585 "enable_quickack": false, 00:40:35.585 "enable_placement_id": 0, 00:40:35.585 "enable_zerocopy_send_server": true, 00:40:35.585 "enable_zerocopy_send_client": false, 00:40:35.585 "zerocopy_threshold": 0, 00:40:35.585 "tls_version": 0, 00:40:35.585 "enable_ktls": false 00:40:35.585 } 00:40:35.585 } 00:40:35.585 ] 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "subsystem": "vmd", 00:40:35.585 "config": [] 00:40:35.585 }, 00:40:35.585 { 00:40:35.585 "subsystem": "accel", 00:40:35.585 "config": [ 00:40:35.585 { 00:40:35.585 "method": "accel_set_options", 00:40:35.585 "params": { 00:40:35.585 "small_cache_size": 128, 00:40:35.585 "large_cache_size": 16, 00:40:35.585 "task_count": 2048, 00:40:35.586 "sequence_count": 2048, 00:40:35.586 "buf_count": 2048 00:40:35.586 } 00:40:35.586 } 00:40:35.586 ] 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "subsystem": "bdev", 00:40:35.586 "config": [ 00:40:35.586 { 00:40:35.586 "method": "bdev_set_options", 00:40:35.586 "params": { 00:40:35.586 "bdev_io_pool_size": 65535, 00:40:35.586 "bdev_io_cache_size": 256, 
00:40:35.586 "bdev_auto_examine": true, 00:40:35.586 "iobuf_small_cache_size": 128, 00:40:35.586 "iobuf_large_cache_size": 16 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_raid_set_options", 00:40:35.586 "params": { 00:40:35.586 "process_window_size_kb": 1024, 00:40:35.586 "process_max_bandwidth_mb_sec": 0 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_iscsi_set_options", 00:40:35.586 "params": { 00:40:35.586 "timeout_sec": 30 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_nvme_set_options", 00:40:35.586 "params": { 00:40:35.586 "action_on_timeout": "none", 00:40:35.586 "timeout_us": 0, 00:40:35.586 "timeout_admin_us": 0, 00:40:35.586 "keep_alive_timeout_ms": 10000, 00:40:35.586 "arbitration_burst": 0, 00:40:35.586 "low_priority_weight": 0, 00:40:35.586 "medium_priority_weight": 0, 00:40:35.586 "high_priority_weight": 0, 00:40:35.586 "nvme_adminq_poll_period_us": 10000, 00:40:35.586 "nvme_ioq_poll_period_us": 0, 00:40:35.586 "io_queue_requests": 512, 00:40:35.586 "delay_cmd_submit": true, 00:40:35.586 "transport_retry_count": 4, 00:40:35.586 "bdev_retry_count": 3, 00:40:35.586 "transport_ack_timeout": 0, 00:40:35.586 "ctrlr_loss_timeout_sec": 0, 00:40:35.586 "reconnect_delay_sec": 0, 00:40:35.586 "fast_io_fail_timeout_sec": 0, 00:40:35.586 "disable_auto_failback": false, 00:40:35.586 "generate_uuids": false, 00:40:35.586 "transport_tos": 0, 00:40:35.586 "nvme_error_stat": false, 00:40:35.586 "rdma_srq_size": 0, 00:40:35.586 "io_path_stat": false, 00:40:35.586 "allow_accel_sequence": false, 00:40:35.586 "rdma_max_cq_size": 0, 00:40:35.586 "rdma_cm_event_timeout_ms": 0, 00:40:35.586 "dhchap_digests": [ 00:40:35.586 "sha256", 00:40:35.586 "sha384", 00:40:35.586 "sha512" 00:40:35.586 ], 00:40:35.586 "dhchap_dhgroups": [ 00:40:35.586 "null", 00:40:35.586 "ffdhe2048", 00:40:35.586 "ffdhe3072", 00:40:35.586 "ffdhe4096", 00:40:35.586 "ffdhe6144", 00:40:35.586 "ffdhe8192" 00:40:35.586 ] 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_nvme_attach_controller", 00:40:35.586 "params": { 00:40:35.586 "name": "nvme0", 00:40:35.586 "trtype": "TCP", 00:40:35.586 "adrfam": "IPv4", 00:40:35.586 "traddr": "127.0.0.1", 00:40:35.586 "trsvcid": "4420", 00:40:35.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:35.586 "prchk_reftag": false, 00:40:35.586 "prchk_guard": false, 00:40:35.586 "ctrlr_loss_timeout_sec": 0, 00:40:35.586 "reconnect_delay_sec": 0, 00:40:35.586 "fast_io_fail_timeout_sec": 0, 00:40:35.586 "psk": "key0", 00:40:35.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:35.586 "hdgst": false, 00:40:35.586 "ddgst": false 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_nvme_set_hotplug", 00:40:35.586 "params": { 00:40:35.586 "period_us": 100000, 00:40:35.586 "enable": false 00:40:35.586 } 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "method": "bdev_wait_for_examine" 00:40:35.586 } 00:40:35.586 ] 00:40:35.586 }, 00:40:35.586 { 00:40:35.586 "subsystem": "nbd", 00:40:35.586 "config": [] 00:40:35.586 } 00:40:35.586 ] 00:40:35.586 }' 00:40:35.586 10:01:30 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:35.586 10:01:30 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:35.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:40:35.586 10:01:30 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:35.586 10:01:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:35.586 [2024-10-07 10:01:30.310068] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 00:40:35.586 [2024-10-07 10:01:30.310200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751766 ] 00:40:35.845 [2024-10-07 10:01:30.404016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.845 [2024-10-07 10:01:30.529831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.103 [2024-10-07 10:01:30.726028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:37.038 10:01:31 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:37.038 10:01:31 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:40:37.038 10:01:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:37.038 10:01:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:37.038 10:01:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.296 10:01:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:37.296 10:01:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:37.296 10:01:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:37.296 10:01:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:37.296 10:01:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.296 10:01:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.296 10:01:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:37.554 10:01:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:37.554 10:01:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:37.554 10:01:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:37.554 10:01:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:37.554 10:01:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.554 10:01:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:37.554 10:01:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:38.120 10:01:32 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:38.120 10:01:32 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:38.120 10:01:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:38.120 10:01:32 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:38.378 10:01:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:38.378 10:01:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:38.378 10:01:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2c6TqDVRiV /tmp/tmp.AldIWNQTfi 00:40:38.378 10:01:33 keyring_file -- keyring/file.sh@20 -- # killprocess 1751766 
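The assertions above boil down to jq filters over keyring_get_keys output: select an entry by .name and read .refcnt (key0 is held twice while the controller rebuilt from the saved config is attached, key1 once). The same extraction in Python, operating on the JSON array the RPC returns; the sample data below is illustrative, only the field names come from the jq filters in the trace.

# Sketch: the jq '.[] | select(.name == "key0") | .refcnt' step from the trace,
# applied to the JSON array returned by keyring_get_keys.
from typing import Optional

def get_refcnt(keys, name) -> Optional[int]:
    for key in keys:
        if key.get("name") == name:
            return key.get("refcnt")
    return None  # key not registered

if __name__ == "__main__":
    # Illustrative shape only; real output comes from the keyring_get_keys RPC.
    sample = [
        {"name": "key0", "path": "/tmp/tmp.2c6TqDVRiV", "refcnt": 2},
        {"name": "key1", "path": "/tmp/tmp.AldIWNQTfi", "refcnt": 1},
    ]
    assert get_refcnt(sample, "key0") == 2
    assert get_refcnt(sample, "key1") == 1
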
00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1751766 ']' 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1751766 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1751766 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1751766' 00:40:38.378 killing process with pid 1751766 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@969 -- # kill 1751766 00:40:38.378 Received shutdown signal, test time was about 1.000000 seconds 00:40:38.378 00:40:38.378 Latency(us) 00:40:38.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:38.378 =================================================================================================================== 00:40:38.378 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:38.378 10:01:33 keyring_file -- common/autotest_common.sh@974 -- # wait 1751766 00:40:38.636 10:01:33 keyring_file -- keyring/file.sh@21 -- # killprocess 1749714 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1749714 ']' 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1749714 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1749714 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1749714' 00:40:38.636 killing process with pid 1749714 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@969 -- # kill 1749714 00:40:38.636 10:01:33 keyring_file -- common/autotest_common.sh@974 -- # wait 1749714 00:40:39.201 00:40:39.201 real 0m21.024s 00:40:39.201 user 0m54.515s 00:40:39.201 sys 0m4.086s 00:40:39.201 10:01:33 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:39.201 10:01:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:39.201 ************************************ 00:40:39.201 END TEST keyring_file 00:40:39.201 ************************************ 00:40:39.201 10:01:33 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:40:39.201 10:01:33 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:39.201 10:01:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:39.201 10:01:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:39.201 10:01:33 -- common/autotest_common.sh@10 -- # set +x 00:40:39.201 ************************************ 00:40:39.201 START TEST keyring_linux 00:40:39.201 ************************************ 00:40:39.201 10:01:33 keyring_linux -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:39.201 Joined session keyring: 887821632 00:40:39.461 * Looking for test storage... 00:40:39.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:39.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.461 --rc genhtml_branch_coverage=1 00:40:39.461 --rc genhtml_function_coverage=1 00:40:39.461 --rc genhtml_legend=1 00:40:39.461 --rc geninfo_all_blocks=1 00:40:39.461 --rc geninfo_unexecuted_blocks=1 00:40:39.461 00:40:39.461 ' 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:39.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.461 --rc genhtml_branch_coverage=1 00:40:39.461 --rc genhtml_function_coverage=1 00:40:39.461 --rc genhtml_legend=1 00:40:39.461 --rc geninfo_all_blocks=1 00:40:39.461 --rc geninfo_unexecuted_blocks=1 00:40:39.461 00:40:39.461 ' 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:39.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.461 --rc genhtml_branch_coverage=1 00:40:39.461 --rc genhtml_function_coverage=1 00:40:39.461 --rc genhtml_legend=1 00:40:39.461 --rc geninfo_all_blocks=1 00:40:39.461 --rc geninfo_unexecuted_blocks=1 00:40:39.461 00:40:39.461 ' 00:40:39.461 10:01:34 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:39.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.461 --rc genhtml_branch_coverage=1 00:40:39.461 --rc genhtml_function_coverage=1 00:40:39.461 --rc genhtml_legend=1 00:40:39.461 --rc geninfo_all_blocks=1 00:40:39.461 --rc geninfo_unexecuted_blocks=1 00:40:39.461 00:40:39.461 ' 00:40:39.461 10:01:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:39.461 10:01:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:39.461 10:01:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:39.461 10:01:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:39.720 10:01:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.720 10:01:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.720 10:01:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.721 10:01:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.721 10:01:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.721 10:01:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.721 10:01:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:39.721 10:01:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:39.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:39.721 /tmp/:spdk-test:key0 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:39.721 
10:01:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:40:39.721 10:01:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:39.721 10:01:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:39.721 /tmp/:spdk-test:key1 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1752342 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1752342 00:40:39.721 10:01:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1752342 ']' 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:39.721 10:01:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.721 [2024-10-07 10:01:34.516383] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:40:39.721 [2024-10-07 10:01:34.516558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752342 ] 00:40:39.980 [2024-10-07 10:01:34.608733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.980 [2024-10-07 10:01:34.732952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:40.547 [2024-10-07 10:01:35.078473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.547 null0 00:40:40.547 [2024-10-07 10:01:35.110529] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:40.547 [2024-10-07 10:01:35.111125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:40.547 937108135 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:40.547 908976555 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1752469 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:40.547 10:01:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1752469 /var/tmp/bperf.sock 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1752469 ']' 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:40.547 10:01:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:40.547 [2024-10-07 10:01:35.186423] Starting SPDK v25.01-pre git sha1 3d8f4fe53 / DPDK 24.03.0 initialization... 
00:40:40.547 [2024-10-07 10:01:35.186499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752469 ] 00:40:40.547 [2024-10-07 10:01:35.252324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.806 [2024-10-07 10:01:35.374104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.806 10:01:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:40.806 10:01:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:40:40.806 10:01:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:40.806 10:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:41.373 10:01:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:41.373 10:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:41.940 10:01:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:41.941 10:01:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:42.199 [2024-10-07 10:01:36.934200] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:42.199 nvme0n1 00:40:42.458 10:01:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:42.458 10:01:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:42.458 10:01:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:42.458 10:01:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:42.458 10:01:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:42.458 10:01:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.025 10:01:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:43.025 10:01:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:43.025 10:01:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:43.025 10:01:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:43.025 10:01:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:43.026 10:01:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:43.026 10:01:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@25 -- # sn=937108135 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:43.592 10:01:38 keyring_linux -- 
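Up to this point the keyring_linux variant has placed both PSKs in the kernel session keyring rather than in files, enabled kernel-keyring lookups in bdevperf (which was started with --wait-for-rpc), and is now attaching the controller by the kernel key name. A condensed sketch of that flow, built only from the keyctl and rpc.py invocations visible in this log; the interchange-format strings and key names are the ones from this run:

  # Add the interchange-format PSKs to the session keyring; keyctl prints their serial numbers.
  keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
  keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Enable keyring_linux in bdevperf, finish framework init, then attach using the kernel key name.
  $RPC -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $RPC -s /var/tmp/bperf.sock framework_start_init
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # Cleanup mirrors the unlink done at the end of the test: resolve the serial, then drop the key.
  SN=$(keyctl search @s user :spdk-test:key0)
  keyctl unlink "$SN"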
keyring/linux.sh@26 -- # [[ 937108135 == \9\3\7\1\0\8\1\3\5 ]] 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 937108135 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:43.592 10:01:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:43.592 Running I/O for 1 seconds... 00:40:44.965 9285.00 IOPS, 36.27 MiB/s 00:40:44.965 Latency(us) 00:40:44.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.965 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:44.965 nvme0n1 : 1.01 9299.99 36.33 0.00 0.00 13670.75 5534.15 19320.98 00:40:44.965 =================================================================================================================== 00:40:44.965 Total : 9299.99 36.33 0.00 0.00 13670.75 5534.15 19320.98 00:40:44.965 { 00:40:44.965 "results": [ 00:40:44.965 { 00:40:44.965 "job": "nvme0n1", 00:40:44.965 "core_mask": "0x2", 00:40:44.965 "workload": "randread", 00:40:44.965 "status": "finished", 00:40:44.965 "queue_depth": 128, 00:40:44.965 "io_size": 4096, 00:40:44.965 "runtime": 1.012259, 00:40:44.965 "iops": 9299.991405361672, 00:40:44.965 "mibps": 36.32809142719403, 00:40:44.965 "io_failed": 0, 00:40:44.965 "io_timeout": 0, 00:40:44.965 "avg_latency_us": 13670.746224142136, 00:40:44.966 "min_latency_us": 5534.151111111111, 00:40:44.966 "max_latency_us": 19320.983703703703 00:40:44.966 } 00:40:44.966 ], 00:40:44.966 "core_count": 1 00:40:44.966 } 00:40:44.966 10:01:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:44.966 10:01:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:45.224 10:01:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:45.224 10:01:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:45.224 10:01:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:45.224 10:01:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:45.224 10:01:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:45.224 10:01:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:45.790 10:01:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:45.790 10:01:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:45.790 10:01:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:45.790 10:01:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@638 -- 
# local arg=bperf_cmd 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:45.790 10:01:40 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:45.790 10:01:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:46.049 [2024-10-07 10:01:40.711524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:46.049 [2024-10-07 10:01:40.712113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d78850 (107): Transport endpoint is not connected 00:40:46.049 [2024-10-07 10:01:40.713103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d78850 (9): Bad file descriptor 00:40:46.049 [2024-10-07 10:01:40.714102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:46.049 [2024-10-07 10:01:40.714122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:46.049 [2024-10-07 10:01:40.714135] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:46.049 [2024-10-07 10:01:40.714149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:40:46.049 request: 00:40:46.049 { 00:40:46.049 "name": "nvme0", 00:40:46.049 "trtype": "tcp", 00:40:46.049 "traddr": "127.0.0.1", 00:40:46.049 "adrfam": "ipv4", 00:40:46.049 "trsvcid": "4420", 00:40:46.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:46.049 "prchk_reftag": false, 00:40:46.049 "prchk_guard": false, 00:40:46.049 "hdgst": false, 00:40:46.049 "ddgst": false, 00:40:46.049 "psk": ":spdk-test:key1", 00:40:46.049 "allow_unrecognized_csi": false, 00:40:46.049 "method": "bdev_nvme_attach_controller", 00:40:46.049 "req_id": 1 00:40:46.049 } 00:40:46.049 Got JSON-RPC error response 00:40:46.049 response: 00:40:46.049 { 00:40:46.049 "code": -5, 00:40:46.049 "message": "Input/output error" 00:40:46.049 } 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@33 -- # sn=937108135 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 937108135 00:40:46.049 1 links removed 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@33 -- # sn=908976555 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 908976555 00:40:46.049 1 links removed 00:40:46.049 10:01:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1752469 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1752469 ']' 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1752469 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1752469 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1752469' 00:40:46.049 killing process with pid 1752469 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 1752469 00:40:46.049 Received shutdown signal, test time was about 1.000000 seconds 00:40:46.049 00:40:46.049 
Latency(us) 00:40:46.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:46.049 =================================================================================================================== 00:40:46.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:46.049 10:01:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 1752469 00:40:46.307 10:01:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1752342 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1752342 ']' 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1752342 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1752342 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1752342' 00:40:46.307 killing process with pid 1752342 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@969 -- # kill 1752342 00:40:46.307 10:01:41 keyring_linux -- common/autotest_common.sh@974 -- # wait 1752342 00:40:46.875 00:40:46.875 real 0m7.622s 00:40:46.875 user 0m16.400s 00:40:46.875 sys 0m2.049s 00:40:46.875 10:01:41 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.875 10:01:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:46.875 ************************************ 00:40:46.875 END TEST keyring_linux 00:40:46.875 ************************************ 00:40:46.875 10:01:41 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:46.875 10:01:41 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:40:46.875 10:01:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:46.875 10:01:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:46.875 10:01:41 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:40:46.875 10:01:41 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:40:46.875 10:01:41 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:40:46.875 10:01:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:46.875 10:01:41 -- common/autotest_common.sh@10 -- # set +x 00:40:46.875 10:01:41 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:40:46.875 10:01:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:46.875 10:01:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:46.875 10:01:41 -- common/autotest_common.sh@10 -- # set +x 00:40:49.414 INFO: APP EXITING 00:40:49.414 INFO: killing all VMs 00:40:49.414 INFO: killing vhost app 00:40:49.414 INFO: 
EXIT DONE 00:40:50.789 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:40:50.789 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.789 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.789 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.789 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.789 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.789 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.789 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.789 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:50.789 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.789 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.789 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.789 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.789 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.789 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.789 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.789 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:52.693 Cleaning 00:40:52.693 Removing: /var/run/dpdk/spdk0/config 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:52.693 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:52.693 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:52.693 Removing: /var/run/dpdk/spdk1/config 00:40:52.693 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:52.694 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:52.694 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:52.694 Removing: /var/run/dpdk/spdk2/config 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:52.694 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:52.694 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:52.694 Removing: /var/run/dpdk/spdk3/config 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:52.694 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:52.694 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:52.694 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:52.694 Removing: /var/run/dpdk/spdk4/config 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:52.694 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:52.694 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:52.694 Removing: /dev/shm/bdev_svc_trace.1 00:40:52.694 Removing: /dev/shm/nvmf_trace.0 00:40:52.694 Removing: /dev/shm/spdk_tgt_trace.pid1405673 00:40:52.694 Removing: /var/run/dpdk/spdk0 00:40:52.694 Removing: /var/run/dpdk/spdk1 00:40:52.694 Removing: /var/run/dpdk/spdk2 00:40:52.694 Removing: /var/run/dpdk/spdk3 00:40:52.694 Removing: /var/run/dpdk/spdk4 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1403976 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1404747 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1405673 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1406186 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1406813 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1406968 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1407671 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1407798 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1408073 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1409652 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1410727 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1411050 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1411377 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1411718 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1411921 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1412197 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1412357 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1412551 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1413135 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1416356 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1416575 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1416740 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1416869 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1417298 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1417313 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1417866 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1417887 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1418199 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1418303 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1418479 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1418604 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1419110 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1419264 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1419594 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1421989 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1425021 
00:40:52.694 Removing: /var/run/dpdk/spdk_pid1432811 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1433304 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1435855 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1436133 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1438935 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1442798 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1445128 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1452105 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1457551 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1459433 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1460102 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1471238 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1473644 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1502409 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1505754 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1509966 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1514369 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1514371 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1514912 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1515554 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1516175 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1516558 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1516611 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1516814 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1517018 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1517020 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1517676 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1518206 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1518866 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1519261 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1519264 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1519525 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1520859 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1521684 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1527524 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1563108 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1566564 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1567736 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1569061 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1569208 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1569356 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1569495 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1570193 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1571736 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1573280 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1573822 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1575574 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1576004 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1576687 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1579225 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1582663 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1582664 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1582665 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1584984 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1589902 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1592544 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1596329 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1597271 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1598370 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1599577 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1602542 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1605015 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1609967 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1610070 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1613129 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1613395 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1613534 
00:40:52.694 Removing: /var/run/dpdk/spdk_pid1613798 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1613803 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1616839 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1617175 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1620111 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1621965 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1625913 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1629508 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1637598 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1642845 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1642847 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1656488 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1657020 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1657551 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1658036 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1658674 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1659212 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1659744 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1660276 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1662858 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1663067 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1666890 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1667069 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1670575 00:40:52.694 Removing: /var/run/dpdk/spdk_pid1673446 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1681251 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1681643 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1684174 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1684443 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1687221 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1691303 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1693986 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1701034 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1707142 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1708318 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1708981 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1719845 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1722238 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1724140 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1729306 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1729311 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1732359 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1733757 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1735161 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1735900 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1738013 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1738796 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1744239 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1744617 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1745005 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1746569 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1746964 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1747249 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1749714 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1749839 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1751766 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1752342 00:40:52.955 Removing: /var/run/dpdk/spdk_pid1752469 00:40:52.955 Clean 00:40:52.955 10:01:47 -- common/autotest_common.sh@1451 -- # return 0 00:40:52.955 10:01:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:52.955 10:01:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:52.955 10:01:47 -- common/autotest_common.sh@10 -- # set +x 00:40:52.955 10:01:47 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:52.955 10:01:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:52.955 10:01:47 -- 
common/autotest_common.sh@10 -- # set +x 00:40:52.955 10:01:47 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:52.955 10:01:47 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:52.955 10:01:47 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:52.955 10:01:47 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:52.955 10:01:47 -- spdk/autotest.sh@394 -- # hostname 00:40:52.955 10:01:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:53.522 geninfo: WARNING: invalid characters removed from testname! 00:42:01.219 10:02:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:05.440 10:02:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:13.561 10:03:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:16.091 10:03:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:26.074 10:03:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:32.646 10:03:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:40.768 10:03:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:40.768 10:03:35 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:42:40.768 10:03:35 -- common/autotest_common.sh@1681 -- $ lcov --version 00:42:40.768 10:03:35 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:42:41.029 10:03:35 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:42:41.029 10:03:35 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:42:41.029 10:03:35 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:42:41.029 10:03:35 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:42:41.029 10:03:35 -- scripts/common.sh@336 -- $ IFS=.-: 00:42:41.029 10:03:35 -- scripts/common.sh@336 -- $ read -ra ver1 00:42:41.029 10:03:35 -- scripts/common.sh@337 -- $ IFS=.-: 00:42:41.029 10:03:35 -- scripts/common.sh@337 -- $ read -ra ver2 00:42:41.029 10:03:35 -- scripts/common.sh@338 -- $ local 'op=<' 00:42:41.029 10:03:35 -- scripts/common.sh@340 -- $ ver1_l=2 00:42:41.029 10:03:35 -- scripts/common.sh@341 -- $ ver2_l=1 00:42:41.029 10:03:35 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:42:41.029 10:03:35 -- scripts/common.sh@344 -- $ case "$op" in 00:42:41.029 10:03:35 -- scripts/common.sh@345 -- $ : 1 00:42:41.029 10:03:35 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:42:41.029 10:03:35 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:41.029 10:03:35 -- scripts/common.sh@365 -- $ decimal 1 00:42:41.029 10:03:35 -- scripts/common.sh@353 -- $ local d=1 00:42:41.029 10:03:35 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:42:41.029 10:03:35 -- scripts/common.sh@355 -- $ echo 1 00:42:41.029 10:03:35 -- scripts/common.sh@365 -- $ ver1[v]=1 00:42:41.029 10:03:35 -- scripts/common.sh@366 -- $ decimal 2 00:42:41.029 10:03:35 -- scripts/common.sh@353 -- $ local d=2 00:42:41.029 10:03:35 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:42:41.029 10:03:35 -- scripts/common.sh@355 -- $ echo 2 00:42:41.029 10:03:35 -- scripts/common.sh@366 -- $ ver2[v]=2 00:42:41.029 10:03:35 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:42:41.029 10:03:35 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:42:41.029 10:03:35 -- scripts/common.sh@368 -- $ return 0 00:42:41.029 10:03:35 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:41.029 10:03:35 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.029 --rc genhtml_branch_coverage=1 00:42:41.029 --rc genhtml_function_coverage=1 00:42:41.029 --rc genhtml_legend=1 00:42:41.029 --rc geninfo_all_blocks=1 00:42:41.029 --rc geninfo_unexecuted_blocks=1 00:42:41.029 00:42:41.029 ' 00:42:41.029 10:03:35 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.029 --rc genhtml_branch_coverage=1 00:42:41.029 --rc genhtml_function_coverage=1 00:42:41.029 --rc genhtml_legend=1 00:42:41.029 --rc geninfo_all_blocks=1 00:42:41.029 --rc geninfo_unexecuted_blocks=1 00:42:41.029 00:42:41.029 ' 00:42:41.029 10:03:35 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:41.029 --rc 
00:42:41.029 10:03:35 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:41.029 --rc genhtml_branch_coverage=1
00:42:41.029 --rc genhtml_function_coverage=1
00:42:41.029 --rc genhtml_legend=1
00:42:41.029 --rc geninfo_all_blocks=1
00:42:41.029 --rc geninfo_unexecuted_blocks=1
00:42:41.029
00:42:41.029 '
00:42:41.029 10:03:35 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:41.029 --rc genhtml_branch_coverage=1
00:42:41.029 --rc genhtml_function_coverage=1
00:42:41.029 --rc genhtml_legend=1
00:42:41.029 --rc geninfo_all_blocks=1
00:42:41.029 --rc geninfo_unexecuted_blocks=1
00:42:41.029
00:42:41.029 '
00:42:41.029 10:03:35 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:41.029 --rc genhtml_branch_coverage=1
00:42:41.029 --rc genhtml_function_coverage=1
00:42:41.029 --rc genhtml_legend=1
00:42:41.029 --rc geninfo_all_blocks=1
00:42:41.029 --rc geninfo_unexecuted_blocks=1
00:42:41.029
00:42:41.029 '
00:42:41.029 10:03:35 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:42:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:41.029 --rc genhtml_branch_coverage=1
00:42:41.029 --rc genhtml_function_coverage=1
00:42:41.029 --rc genhtml_legend=1
00:42:41.029 --rc geninfo_all_blocks=1
00:42:41.029 --rc geninfo_unexecuted_blocks=1
00:42:41.029
00:42:41.029 '
00:42:41.029 10:03:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:42:41.029 10:03:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:42:41.029 10:03:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:42:41.029 10:03:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:42:41.029 10:03:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:42:41.029 10:03:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:41.029 10:03:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:41.029 10:03:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:41.029 10:03:35 -- paths/export.sh@5 -- $ export PATH
00:42:41.029 10:03:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:41.029 10:03:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:42:41.029 10:03:35 -- common/autobuild_common.sh@486 -- $ date +%s
00:42:41.029 10:03:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728288215.XXXXXX
00:42:41.029 10:03:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728288215.fBLk8K
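autobuild_common.sh@486 above creates a per-run scratch directory whose name embeds the epoch timestamp (here /tmp/spdk_1728288215.fBLk8K). A generic sketch of that mktemp pattern; the cleanup trap is added only to keep the sketch self-contained:

  #!/usr/bin/env bash
  # Sketch: timestamped scratch directory under $TMPDIR (or /tmp), removed on exit.
  set -euo pipefail

  stamp=$(date +%s)
  workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")   # e.g. /tmp/spdk_1728288215.fBLk8K
  trap 'rm -rf "$workspace"' EXIT

  echo "scratch directory: $workspace"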
00:42:41.029 10:03:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:42:41.029 10:03:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:42:41.029 10:03:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:42:41.029 10:03:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:42:41.029 10:03:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:42:41.029 10:03:35 -- common/autobuild_common.sh@502 -- $ get_config_params
00:42:41.029 10:03:35 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:42:41.029 10:03:35 -- common/autotest_common.sh@10 -- $ set +x
00:42:41.029 10:03:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:42:41.029 10:03:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:42:41.029 10:03:35 -- pm/common@17 -- $ local monitor
00:42:41.029 10:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.029 10:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.029 10:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.029 10:03:35 -- pm/common@21 -- $ date +%s
00:42:41.029 10:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.029 10:03:35 -- pm/common@21 -- $ date +%s
00:42:41.029 10:03:35 -- pm/common@25 -- $ sleep 1
00:42:41.029 10:03:35 -- pm/common@21 -- $ date +%s
00:42:41.029 10:03:35 -- pm/common@21 -- $ date +%s
00:42:41.029 10:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288215
00:42:41.029 10:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288215
00:42:41.029 10:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288215
00:42:41.029 10:03:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288215
00:42:41.029 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288215_collect-cpu-load.pm.log
00:42:41.029 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288215_collect-vmstat.pm.log
00:42:41.029 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288215_collect-cpu-temp.pm.log
00:42:41.029 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288215_collect-bmc-pm.bmc.pm.log
00:42:41.968 10:03:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
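pm/common@17-25 above launches each resource collector in the background, redirects it to its own .pm.log under the power/ output directory, and registers a trap so the monitors are torn down on exit; the matching pid-file checks and kill -TERM calls appear further down (pm/common@42-50). A generic sketch of that start/stop pattern; the collector commands below are stand-ins, not SPDK's scripts/perf/pm helpers:

  #!/usr/bin/env bash
  # Sketch: run monitors in the background, record their PIDs in pid files,
  # and TERM whatever is still running when the script exits.
  set -euo pipefail

  outdir=./power
  mkdir -p "$outdir"

  start_monitor() {
      # $1 = name used for the log/pid files, remaining args = command to run.
      local name=$1; shift
      "$@" > "$outdir/$name.pm.log" 2>&1 &
      echo $! > "$outdir/$name.pid"
  }

  stop_monitors() {
      local pidfile
      for pidfile in "$outdir"/*.pid; do
          [[ -e $pidfile ]] || continue
          kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
      done
  }
  trap stop_monitors EXIT

  # Stand-ins for collect-cpu-load, collect-vmstat, collect-cpu-temp, ...
  start_monitor vmstat vmstat 1
  start_monitor cpu top -b -d 5
  sleep 10    # the packaging work would run here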
00:42:41.968 10:03:36 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:42:41.968 10:03:36 -- spdk/autopackage.sh@14 -- $ timing_finish
00:42:41.968 10:03:36 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:41.968 10:03:36 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:42:41.968 10:03:36 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:41.968 10:03:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:42:41.968 10:03:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:42:41.968 10:03:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:42:41.968 10:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.968 10:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:42:41.968 10:03:36 -- pm/common@44 -- $ pid=1764783
00:42:41.968 10:03:36 -- pm/common@50 -- $ kill -TERM 1764783
00:42:41.968 10:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.968 10:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:42:41.968 10:03:36 -- pm/common@44 -- $ pid=1764785
00:42:41.968 10:03:36 -- pm/common@50 -- $ kill -TERM 1764785
00:42:41.968 10:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.968 10:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:42:41.968 10:03:36 -- pm/common@44 -- $ pid=1764787
00:42:41.968 10:03:36 -- pm/common@50 -- $ kill -TERM 1764787
00:42:41.968 10:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:41.968 10:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:42:41.968 10:03:36 -- pm/common@44 -- $ pid=1764818
00:42:41.968 10:03:36 -- pm/common@50 -- $ sudo -E kill -TERM 1764818
00:42:41.968 + [[ -n 1325863 ]]
00:42:41.968 + sudo kill 1325863
00:42:42.234 Pausing (Preparing for shutdown)
01:03:09.339 Resuming build at Mon Oct 07 08:24:04 UTC 2024 after Jenkins restart
01:03:22.842 Waiting for reconnection of GP8 before proceeding with build
01:03:23.063 Timeout expired 3.5 sec ago
01:03:23.063 Cancelling nested steps due to timeout
01:03:23.073 Ready to run at Mon Oct 07 08:24:17 UTC 2024
01:03:23.078 [Pipeline] }
01:03:23.109 [Pipeline] // stage
01:03:23.118 [Pipeline] }
01:03:23.134 [Pipeline] // timeout
01:03:23.142 [Pipeline] }
01:03:23.147 Timeout has been exceeded
01:03:23.147 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 8dcff330-fd76-4798-a593-a0ba27c5bd73
01:03:23.147 Setting overall build result to ABORTED
01:03:23.161 [Pipeline] // catchError
01:03:23.169 [Pipeline] }
01:03:23.194 [Pipeline] // wrap
01:03:23.197 [Pipeline] }
01:03:23.204 [Pipeline] // catchError
01:03:23.217 [Pipeline] stage
01:03:23.220 [Pipeline] { (Epilogue)
01:03:23.232 [Pipeline] catchError
01:03:23.234 [Pipeline] {
01:03:23.245 [Pipeline] echo
01:03:23.246 Cleanup processes
01:03:23.250 [Pipeline] sh
01:03:24.030 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:24.030 1769772 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:24.084 [Pipeline] sh
01:03:24.375 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:24.375 ++ grep -v 'sudo pgrep'
01:03:24.375 ++ awk '{print $1}'
01:03:24.375 + sudo kill -9
01:03:24.375 + true
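The Epilogue's "Cleanup processes" step above looks for anything still running out of the workspace and force-kills it before the node is handed back; the trailing "+ true" keeps the step green when nothing is left to kill. A standalone sketch of that pgrep/awk/kill idiom, with a placeholder workspace path:

  #!/usr/bin/env bash
  # Sketch: kill leftover processes whose command line mentions the workspace.
  workspace=/var/jenkins/workspace/example   # placeholder, not this job's path

  # pgrep -af prints "PID full-command-line" for every match; drop our own
  # pgrep invocation and keep only the PID column.
  pids=$(pgrep -af "$workspace" | grep -v pgrep | awk '{print $1}')

  if [[ -n $pids ]]; then
      # shellcheck disable=SC2086   # splitting the PID list is intentional
      kill -9 $pids || true
  fi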
01:03:24.385 [Pipeline] sh
01:03:24.668 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:03:51.279 [Pipeline] sh
01:03:51.563 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:03:51.821 Artifacts sizes are good
01:03:51.834 [Pipeline] archiveArtifacts
01:03:51.840 Archiving artifacts
01:03:52.362 [Pipeline] sh
01:03:52.650 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
01:03:52.664 [Pipeline] cleanWs
01:03:52.672 [WS-CLEANUP] Deleting project workspace...
01:03:52.672 [WS-CLEANUP] Deferred wipeout is used...
01:03:52.682 [WS-CLEANUP] done
01:03:52.684 [Pipeline] }
01:03:52.698 [Pipeline] // catchError
01:03:52.706 [Pipeline] echo
01:03:52.707 Tests finished with errors. Please check the logs for more info.
01:03:52.710 [Pipeline] echo
01:03:52.712 Execution node will be rebooted.
01:03:52.727 [Pipeline] build
01:03:52.731 Scheduling project: reset-job
01:03:52.745 [Pipeline] sh
01:03:53.035 + logger -p user.info -t JENKINS-CI
01:03:53.045 [Pipeline] }
01:03:53.057 [Pipeline] // stage
01:03:53.062 [Pipeline] }
01:03:53.073 [Pipeline] // node
01:03:53.078 [Pipeline] End of Pipeline
01:03:53.097 Finished: ABORTED